00:00:00.001 Started by upstream project "autotest-nightly" build number 3795
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3175
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:13.775 The recommended git tool is: git
00:00:13.775 using credential 00000000-0000-0000-0000-000000000002
00:00:13.777 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:13.788 Fetching changes from the remote Git repository
00:00:13.790 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:13.801 Using shallow fetch with depth 1
00:00:13.801 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:13.801 > git --version # timeout=10
00:00:13.812 > git --version # 'git version 2.39.2'
00:00:13.812 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:13.826 Setting http proxy: proxy-dmz.intel.com:911
00:00:13.826 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:20.157 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:20.172 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:20.187 Checking out Revision ea7646cba2e992b05bb6a53407de7fbcf465b5c6 (FETCH_HEAD)
00:00:20.187 > git config core.sparsecheckout # timeout=10
00:00:20.201 > git read-tree -mu HEAD # timeout=10
00:00:20.221 > git checkout -f ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=5
00:00:20.248 Commit message: "ansible/inventory: Fix GP16's BMC address"
00:00:20.249 > git rev-list --no-walk fcd93e2ba68418fb72075306675cd28d3d4f53d6 # timeout=10
00:00:20.361 [Pipeline] Start of Pipeline
00:00:20.377 [Pipeline] library
00:00:20.379 Loading library shm_lib@master
00:00:20.379 Library shm_lib@master is cached. Copying from home.
00:00:20.394 [Pipeline] node
00:00:20.401 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:20.403 [Pipeline] {
00:00:20.410 [Pipeline] catchError
00:00:20.411 [Pipeline] {
00:00:20.420 [Pipeline] wrap
00:00:20.427 [Pipeline] {
00:00:20.437 [Pipeline] stage
00:00:20.439 [Pipeline] { (Prologue)
00:00:20.622 [Pipeline] sh
00:00:20.919 + logger -p user.info -t JENKINS-CI
00:00:20.943 [Pipeline] echo
00:00:20.944 Node: WFP22
00:00:20.950 [Pipeline] sh
00:00:21.247 [Pipeline] setCustomBuildProperty
00:00:21.256 [Pipeline] echo
00:00:21.257 Cleanup processes
00:00:21.261 [Pipeline] sh
00:00:21.544 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:21.544 1110868 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:21.558 [Pipeline] sh
00:00:21.837 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:21.837 ++ grep -v 'sudo pgrep'
00:00:21.837 ++ awk '{print $1}'
00:00:21.837 + sudo kill -9
00:00:21.837 + true
00:00:21.851 [Pipeline] cleanWs
00:00:21.860 [WS-CLEANUP] Deleting project workspace...
00:00:21.860 [WS-CLEANUP] Deferred wipeout is used...
00:00:21.867 [WS-CLEANUP] done
00:00:21.871 [Pipeline] setCustomBuildProperty
00:00:21.886 [Pipeline] sh
00:00:22.164 + sudo git config --global --replace-all safe.directory '*'
00:00:22.241 [Pipeline] nodesByLabel
00:00:22.242 Found a total of 2 nodes with the 'sorcerer' label
00:00:22.253 [Pipeline] httpRequest
00:00:22.258 HttpMethod: GET
00:00:22.259 URL: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:22.265 Sending request to url: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:22.272 Response Code: HTTP/1.1 200 OK
00:00:22.272 Success: Status code 200 is in the accepted range: 200,404
00:00:22.273 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:31.823 [Pipeline] sh
00:00:32.108 + tar --no-same-owner -xf jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:32.127 [Pipeline] httpRequest
00:00:32.132 HttpMethod: GET
00:00:32.133 URL: http://10.211.164.101/packages/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:00:32.133 Sending request to url: http://10.211.164.101/packages/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:00:32.140 Response Code: HTTP/1.1 200 OK
00:00:32.140 Success: Status code 200 is in the accepted range: 200,404
00:00:32.141 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:03:10.230 [Pipeline] sh
00:03:10.519 + tar --no-same-owner -xf spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:03:13.822 [Pipeline] sh
00:03:14.101 + git -C spdk log --oneline -n5
00:03:14.101 9ccef4907 nvme/tcp: fix seq failure handling
00:03:14.101 2a268d7a6 nvme/tcp: move logic from safe ver of req complete
00:03:14.101 8531a41f9 nvme/tcp: add util to cond schedule qpair poll
00:03:14.101 b10f50b08 scripts/pkgdep: Add pkg-config package to {rhel,debian}-based distros
00:03:14.101 89d49f772 pkgdep/debian: Handle PEP 668
00:03:14.112 [Pipeline] }
00:03:14.129 [Pipeline] // stage
00:03:14.138 [Pipeline] stage
00:03:14.140 [Pipeline] { (Prepare)
00:03:14.153 [Pipeline] writeFile
00:03:14.166 [Pipeline] sh
00:03:14.449 + logger -p user.info -t JENKINS-CI
00:03:14.460 [Pipeline] sh
00:03:14.737 + logger -p user.info -t JENKINS-CI
00:03:14.751 [Pipeline] sh
00:03:15.034 + cat autorun-spdk.conf
00:03:15.034 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:15.034 SPDK_TEST_NVMF=1
00:03:15.034 SPDK_TEST_NVME_CLI=1
00:03:15.034 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:15.034 SPDK_TEST_NVMF_NICS=e810
00:03:15.034 SPDK_RUN_UBSAN=1
00:03:15.034 NET_TYPE=phy
00:03:15.041 RUN_NIGHTLY=1
00:03:15.045 [Pipeline] readFile
00:03:15.068 [Pipeline] withEnv
00:03:15.070 [Pipeline] {
00:03:15.084 [Pipeline] sh
00:03:15.368 + set -ex
00:03:15.368 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:15.368 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:15.368 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:15.368 ++ SPDK_TEST_NVMF=1
00:03:15.368 ++ SPDK_TEST_NVME_CLI=1
00:03:15.368 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:15.368 ++ SPDK_TEST_NVMF_NICS=e810
00:03:15.368 ++ SPDK_RUN_UBSAN=1
00:03:15.368 ++ NET_TYPE=phy
00:03:15.368 ++ RUN_NIGHTLY=1
00:03:15.368 + case $SPDK_TEST_NVMF_NICS in
00:03:15.368 + DRIVERS=ice
00:03:15.368 + [[ tcp == \r\d\m\a ]]
00:03:15.368 + [[ -n ice ]]
00:03:15.368 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:15.368 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:15.368 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:15.368 rmmod: ERROR: Module irdma is not currently loaded
00:03:15.368 rmmod: ERROR: Module i40iw is not currently loaded
00:03:15.368 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:15.368 + true
00:03:15.368 + for D in $DRIVERS
00:03:15.368 + sudo modprobe ice
00:03:15.368 + exit 0
00:03:15.377 [Pipeline] }
00:03:15.395 [Pipeline] // withEnv
00:03:15.401 [Pipeline] }
00:03:15.422 [Pipeline] // stage
00:03:15.429 [Pipeline] catchError
00:03:15.430 [Pipeline] {
00:03:15.445 [Pipeline] timeout
00:03:15.445 Timeout set to expire in 50 min
00:03:15.448 [Pipeline] {
00:03:15.463 [Pipeline] stage
00:03:15.465 [Pipeline] { (Tests)
00:03:15.476 [Pipeline] sh
00:03:15.756 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:15.756 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:15.756 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:15.756 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:15.756 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:15.756 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:15.756 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:15.756 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:15.756 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:15.756 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:15.756 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:15.756 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:15.756 + source /etc/os-release
00:03:15.756 ++ NAME='Fedora Linux'
00:03:15.756 ++ VERSION='38 (Cloud Edition)'
00:03:15.756 ++ ID=fedora
00:03:15.756 ++ VERSION_ID=38
00:03:15.756 ++ VERSION_CODENAME=
00:03:15.756 ++ PLATFORM_ID=platform:f38
00:03:15.757 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:03:15.757 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:15.757 ++ LOGO=fedora-logo-icon
00:03:15.757 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:03:15.757 ++ HOME_URL=https://fedoraproject.org/
00:03:15.757 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:03:15.757 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:15.757 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:15.757 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:15.757 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:03:15.757 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:15.757 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:03:15.757 ++ SUPPORT_END=2024-05-14
00:03:15.757 ++ VARIANT='Cloud Edition'
00:03:15.757 ++ VARIANT_ID=cloud
00:03:15.757 + uname -a
00:03:15.757 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:03:15.757 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:19.048 Hugepages
00:03:19.048 node hugesize free / total
00:03:19.048 node0 1048576kB 0 / 0
00:03:19.048 node0 2048kB 0 / 0
00:03:19.048 node1 1048576kB 0 / 0
00:03:19.048 node1 2048kB 0 / 0
00:03:19.048
00:03:19.048 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:19.048 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:19.048 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:19.048 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:19.048 + rm -f /tmp/spdk-ld-path
00:03:19.048 + source autorun-spdk.conf
00:03:19.048 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.048 ++ SPDK_TEST_NVMF=1
00:03:19.048 ++ SPDK_TEST_NVME_CLI=1
00:03:19.048 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:19.048 ++ SPDK_TEST_NVMF_NICS=e810
00:03:19.048 ++ SPDK_RUN_UBSAN=1
00:03:19.048 ++ NET_TYPE=phy
00:03:19.048 ++ RUN_NIGHTLY=1
00:03:19.048 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:19.048 + [[ -n '' ]]
00:03:19.048 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:19.048 + for M in /var/spdk/build-*-manifest.txt
00:03:19.048 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:19.048 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.048 + for M in /var/spdk/build-*-manifest.txt
00:03:19.048 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:19.048 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:19.048 ++ uname
00:03:19.049 + [[ Linux == \L\i\n\u\x ]]
00:03:19.049 + sudo dmesg -T
00:03:19.049 + sudo dmesg --clear
00:03:19.049 + dmesg_pid=1112348
00:03:19.049 + [[ Fedora Linux == FreeBSD ]]
00:03:19.049 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:19.049 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:19.049 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:19.049 + [[ -x /usr/src/fio-static/fio ]]
00:03:19.049 + export FIO_BIN=/usr/src/fio-static/fio
00:03:19.049 + FIO_BIN=/usr/src/fio-static/fio
00:03:19.049 + sudo dmesg -Tw
00:03:19.049 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:19.049 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:19.049 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:19.049 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:19.049 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:19.049 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:19.049 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:19.049 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:19.049 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:19.049 Test configuration:
00:03:19.049 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:19.049 SPDK_TEST_NVMF=1
00:03:19.049 SPDK_TEST_NVME_CLI=1
00:03:19.049 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:19.049 SPDK_TEST_NVMF_NICS=e810
00:03:19.049 SPDK_RUN_UBSAN=1
00:03:19.049 NET_TYPE=phy
00:03:19.049 RUN_NIGHTLY=1
13:31:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:19.049 13:31:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:19.049 13:31:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:19.049 13:31:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:19.049 13:31:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.049 13:31:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.049 13:31:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.049 13:31:11 -- paths/export.sh@5 -- $ export PATH
00:03:19.049 13:31:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:19.049 13:31:11 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:19.049 13:31:11 -- common/autobuild_common.sh@437 -- $ date +%s
00:03:19.049 13:31:11 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718105471.XXXXXX
00:03:19.049 13:31:11 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718105471.icQgiC
00:03:19.049 13:31:11 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:03:19.049 13:31:11 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:03:19.049 13:31:11 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:19.049 13:31:11 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:19.049 13:31:11 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:19.049 13:31:11 -- common/autobuild_common.sh@453 -- $ get_config_params
00:03:19.049 13:31:11 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:03:19.049 13:31:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:19.049 13:31:11 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:03:19.049 13:31:11 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:03:19.049 13:31:11 -- pm/common@17 -- $ local monitor
00:03:19.049 13:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.049 13:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.049 13:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.049 13:31:11 -- pm/common@21 -- $ date +%s
00:03:19.049 13:31:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:19.049 13:31:11 -- pm/common@21 -- $ date +%s
00:03:19.049 13:31:11 -- pm/common@25 -- $ sleep 1
00:03:19.049 13:31:11 -- pm/common@21 -- $ date +%s
00:03:19.049 13:31:11 -- pm/common@21 -- $ date +%s
00:03:19.049 13:31:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105471
00:03:19.049 13:31:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105471
00:03:19.308 13:31:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105471
00:03:19.308 13:31:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105471
00:03:19.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105471_collect-vmstat.pm.log
00:03:19.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105471_collect-cpu-load.pm.log
00:03:19.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105471_collect-cpu-temp.pm.log
00:03:19.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105471_collect-bmc-pm.bmc.pm.log
00:03:20.245 13:31:12 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:03:20.245 13:31:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:20.245 13:31:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:20.245 13:31:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:20.245 13:31:12 -- spdk/autobuild.sh@16 -- $ date -u
00:03:20.245 Tue Jun 11 11:31:12 AM UTC 2024
00:03:20.245 13:31:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:20.245 v24.09-pre-65-g9ccef4907
00:03:20.245 13:31:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:20.245 13:31:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:20.245 13:31:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:20.245 13:31:12 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:03:20.245 13:31:12 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:03:20.245 13:31:12 -- common/autotest_common.sh@10 -- $ set +x
00:03:20.245 ************************************
00:03:20.245 START TEST ubsan
00:03:20.245 ************************************
00:03:20.245 13:31:13 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:03:20.245 using ubsan
00:03:20.245
00:03:20.245 real 0m0.001s
00:03:20.245 user 0m0.001s
00:03:20.245 sys 0m0.000s
00:03:20.245 13:31:13 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:03:20.245 13:31:13 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:20.245 ************************************
00:03:20.245 END TEST ubsan
00:03:20.245 ************************************
00:03:20.245 13:31:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:20.245 13:31:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:20.245 13:31:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:20.245 13:31:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:20.245 13:31:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:20.245 13:31:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:20.245 13:31:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:20.245 13:31:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:20.245 13:31:13 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:03:20.504 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:20.504 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:20.763 Using 'verbs' RDMA provider
00:03:36.655 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:51.532 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:51.532 Creating mk/config.mk...done.
00:03:51.532 Creating mk/cc.flags.mk...done.
00:03:51.532 Type 'make' to build.
00:03:51.532 13:31:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:03:51.532 13:31:42 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:03:51.532 13:31:42 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:03:51.532 13:31:42 -- common/autotest_common.sh@10 -- $ set +x
00:03:51.532 ************************************
00:03:51.532 START TEST make
00:03:51.532 ************************************
00:03:51.532 13:31:43 make -- common/autotest_common.sh@1124 -- $ make -j112
00:03:51.532 make[1]: Nothing to be done for 'all'.
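For anyone replaying this stage outside Jenkins, the autobuild sequence above reduces to configure-then-make. A minimal sketch using the flags the log records (the checkout path is hypothetical; the job's -j112 matched this builder's CPU count):

    # Sketch: reproduce the logged SPDK configure + build by hand.
    # Flags are copied verbatim from the logged configure invocation.
    cd /path/to/spdk   # hypothetical; the job used /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"  # the job passed -j112 explicitly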
00:03:59.652 The Meson build system
00:03:59.652 Version: 1.3.1
00:03:59.652 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:59.652 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:59.652 Build type: native build
00:03:59.652 Program cat found: YES (/usr/bin/cat)
00:03:59.652 Project name: DPDK
00:03:59.652 Project version: 24.03.0
00:03:59.652 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:59.652 C linker for the host machine: cc ld.bfd 2.39-16
00:03:59.652 Host machine cpu family: x86_64
00:03:59.652 Host machine cpu: x86_64
00:03:59.652 Message: ## Building in Developer Mode ##
00:03:59.652 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:59.652 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:59.652 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:59.652 Program python3 found: YES (/usr/bin/python3)
00:03:59.652 Program cat found: YES (/usr/bin/cat)
00:03:59.652 Compiler for C supports arguments -march=native: YES
00:03:59.652 Checking for size of "void *" : 8
00:03:59.652 Checking for size of "void *" : 8 (cached)
00:03:59.652 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:59.652 Library m found: YES
00:03:59.652 Library numa found: YES
00:03:59.652 Has header "numaif.h" : YES
00:03:59.652 Library fdt found: NO
00:03:59.652 Library execinfo found: NO
00:03:59.652 Has header "execinfo.h" : YES
00:03:59.652 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:59.652 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:59.652 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:59.652 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:59.652 Run-time dependency openssl found: YES 3.0.9
00:03:59.652 Run-time dependency libpcap found: YES 1.10.4
00:03:59.652 Has header "pcap.h" with dependency libpcap: YES
00:03:59.652 Compiler for C supports arguments -Wcast-qual: YES
00:03:59.652 Compiler for C supports arguments -Wdeprecated: YES
00:03:59.652 Compiler for C supports arguments -Wformat: YES
00:03:59.652 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:59.652 Compiler for C supports arguments -Wformat-security: NO
00:03:59.652 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:59.652 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:59.652 Compiler for C supports arguments -Wnested-externs: YES
00:03:59.652 Compiler for C supports arguments -Wold-style-definition: YES
00:03:59.652 Compiler for C supports arguments -Wpointer-arith: YES
00:03:59.652 Compiler for C supports arguments -Wsign-compare: YES
00:03:59.652 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:59.652 Compiler for C supports arguments -Wundef: YES
00:03:59.652 Compiler for C supports arguments -Wwrite-strings: YES
00:03:59.652 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:59.652 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:59.652 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:59.652 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:59.652 Program objdump found: YES (/usr/bin/objdump)
00:03:59.652 Compiler for C supports arguments -mavx512f: YES
00:03:59.652 Checking if "AVX512 checking" compiles: YES
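Each "Compiler for C supports arguments ...: YES/NO" line above is Meson test-compiling a stub translation unit with the candidate flag. A rough hand-run equivalent of one such probe (cc stands in for whichever compiler Meson selected):

    # Sketch: check a single flag the way the "-mavx512f: YES" probe does.
    echo 'int main(void) { return 0; }' > /tmp/flag_probe.c
    if cc -mavx512f -c /tmp/flag_probe.c -o /tmp/flag_probe.o 2>/dev/null; then
        echo 'Compiler for C supports arguments -mavx512f: YES'
    else
        echo 'Compiler for C supports arguments -mavx512f: NO'
    fi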
00:03:59.652 Fetching value of define "__SSE4_2__" : 1
00:03:59.652 Fetching value of define "__AES__" : 1
00:03:59.652 Fetching value of define "__AVX__" : 1
00:03:59.652 Fetching value of define "__AVX2__" : 1
00:03:59.652 Fetching value of define "__AVX512BW__" : 1
00:03:59.652 Fetching value of define "__AVX512CD__" : 1
00:03:59.652 Fetching value of define "__AVX512DQ__" : 1
00:03:59.652 Fetching value of define "__AVX512F__" : 1
00:03:59.652 Fetching value of define "__AVX512VL__" : 1
00:03:59.652 Fetching value of define "__PCLMUL__" : 1
00:03:59.652 Fetching value of define "__RDRND__" : 1
00:03:59.652 Fetching value of define "__RDSEED__" : 1
00:03:59.652 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:59.652 Fetching value of define "__znver1__" : (undefined)
00:03:59.652 Fetching value of define "__znver2__" : (undefined)
00:03:59.652 Fetching value of define "__znver3__" : (undefined)
00:03:59.652 Fetching value of define "__znver4__" : (undefined)
00:03:59.652 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:59.652 Message: lib/log: Defining dependency "log"
00:03:59.652 Message: lib/kvargs: Defining dependency "kvargs"
00:03:59.652 Message: lib/telemetry: Defining dependency "telemetry"
00:03:59.652 Checking for function "getentropy" : NO
00:03:59.652 Message: lib/eal: Defining dependency "eal"
00:03:59.652 Message: lib/ring: Defining dependency "ring"
00:03:59.652 Message: lib/rcu: Defining dependency "rcu"
00:03:59.652 Message: lib/mempool: Defining dependency "mempool"
00:03:59.652 Message: lib/mbuf: Defining dependency "mbuf"
00:03:59.652 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:59.652 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:59.652 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:59.652 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:59.652 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:59.652 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:59.652 Compiler for C supports arguments -mpclmul: YES
00:03:59.652 Compiler for C supports arguments -maes: YES
00:03:59.652 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:59.652 Compiler for C supports arguments -mavx512bw: YES
00:03:59.652 Compiler for C supports arguments -mavx512dq: YES
00:03:59.652 Compiler for C supports arguments -mavx512vl: YES
00:03:59.652 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:59.652 Compiler for C supports arguments -mavx2: YES
00:03:59.652 Compiler for C supports arguments -mavx: YES
00:03:59.652 Message: lib/net: Defining dependency "net"
00:03:59.652 Message: lib/meter: Defining dependency "meter"
00:03:59.652 Message: lib/ethdev: Defining dependency "ethdev"
00:03:59.652 Message: lib/pci: Defining dependency "pci"
00:03:59.652 Message: lib/cmdline: Defining dependency "cmdline"
00:03:59.652 Message: lib/hash: Defining dependency "hash"
00:03:59.652 Message: lib/timer: Defining dependency "timer"
00:03:59.652 Message: lib/compressdev: Defining dependency "compressdev"
00:03:59.652 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:59.652 Message: lib/dmadev: Defining dependency "dmadev"
00:03:59.652 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:59.652 Message: lib/power: Defining dependency "power"
00:03:59.652 Message: lib/reorder: Defining dependency "reorder"
00:03:59.652 Message: lib/security: Defining dependency "security"
00:03:59.652 Has header "linux/userfaultfd.h" : YES
00:03:59.652 Has header "linux/vduse.h" : YES
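The "Fetching value of define" lines read the compiler's predefined macros to learn which ISA extensions -march=native enabled on this builder. With gcc or clang the same values can be dumped directly (a sketch, not Meson's literal invocation):

    # Sketch: list the __AVX*__ / __AES__ / __PCLMUL__-style macros for this host.
    cc -march=native -dM -E - </dev/null | grep -E '__(AVX|AES|PCLMUL|RDRND|RDSEED)'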
00:03:59.652 Message: lib/vhost: Defining dependency "vhost"
00:03:59.652 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:59.652 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:59.652 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:59.652 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:59.652 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:59.652 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:59.652 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:59.652 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:59.652 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:59.652 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:59.652 Program doxygen found: YES (/usr/bin/doxygen)
00:03:59.652 Configuring doxy-api-html.conf using configuration
00:03:59.652 Configuring doxy-api-man.conf using configuration
00:03:59.652 Program mandb found: YES (/usr/bin/mandb)
00:03:59.652 Program sphinx-build found: NO
00:03:59.652 Configuring rte_build_config.h using configuration
00:03:59.652 Message:
00:03:59.652 =================
00:03:59.652 Applications Enabled
00:03:59.652 =================
00:03:59.652
00:03:59.652 apps:
00:03:59.652
00:03:59.652
00:03:59.652 Message:
00:03:59.652 =================
00:03:59.652 Libraries Enabled
00:03:59.652 =================
00:03:59.652
00:03:59.652 libs:
00:03:59.652 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:59.652 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:59.652 cryptodev, dmadev, power, reorder, security, vhost,
00:03:59.652
00:03:59.652 Message:
00:03:59.652 ===============
00:03:59.652 Drivers Enabled
00:03:59.652 ===============
00:03:59.652
00:03:59.652 common:
00:03:59.652
00:03:59.652 bus:
00:03:59.652 pci, vdev,
00:03:59.652 mempool:
00:03:59.652 ring,
00:03:59.652 dma:
00:03:59.652
00:03:59.652 net:
00:03:59.652
00:03:59.652 crypto:
00:03:59.652
00:03:59.652 compress:
00:03:59.652
00:03:59.652 vdpa:
00:03:59.652
00:03:59.652
00:03:59.652 Message:
00:03:59.652 =================
00:03:59.652 Content Skipped
00:03:59.652 =================
00:03:59.652
00:03:59.652 apps:
00:03:59.652 dumpcap: explicitly disabled via build config
00:03:59.652 graph: explicitly disabled via build config
00:03:59.652 pdump: explicitly disabled via build config
00:03:59.652 proc-info: explicitly disabled via build config
00:03:59.652 test-acl: explicitly disabled via build config
00:03:59.652 test-bbdev: explicitly disabled via build config
00:03:59.652 test-cmdline: explicitly disabled via build config
00:03:59.652 test-compress-perf: explicitly disabled via build config
00:03:59.652 test-crypto-perf: explicitly disabled via build config
00:03:59.652 test-dma-perf: explicitly disabled via build config
00:03:59.652 test-eventdev: explicitly disabled via build config
00:03:59.652 test-fib: explicitly disabled via build config
00:03:59.653 test-flow-perf: explicitly disabled via build config
00:03:59.653 test-gpudev: explicitly disabled via build config
00:03:59.653 test-mldev: explicitly disabled via build config
00:03:59.653 test-pipeline: explicitly disabled via build config
00:03:59.653 test-pmd: explicitly disabled via build config
00:03:59.653 test-regex: explicitly disabled via build config
00:03:59.653 test-sad: explicitly disabled via build config
00:03:59.653 test-security-perf: explicitly disabled via build config
00:03:59.653
00:03:59.653 libs:
00:03:59.653 argparse: explicitly disabled via build config
00:03:59.653 metrics: explicitly disabled via build config
00:03:59.653 acl: explicitly disabled via build config
00:03:59.653 bbdev: explicitly disabled via build config
00:03:59.653 bitratestats: explicitly disabled via build config
00:03:59.653 bpf: explicitly disabled via build config
00:03:59.653 cfgfile: explicitly disabled via build config
00:03:59.653 distributor: explicitly disabled via build config
00:03:59.653 efd: explicitly disabled via build config
00:03:59.653 eventdev: explicitly disabled via build config
00:03:59.653 dispatcher: explicitly disabled via build config
00:03:59.653 gpudev: explicitly disabled via build config
00:03:59.653 gro: explicitly disabled via build config
00:03:59.653 gso: explicitly disabled via build config
00:03:59.653 ip_frag: explicitly disabled via build config
00:03:59.653 jobstats: explicitly disabled via build config
00:03:59.653 latencystats: explicitly disabled via build config
00:03:59.653 lpm: explicitly disabled via build config
00:03:59.653 member: explicitly disabled via build config
00:03:59.653 pcapng: explicitly disabled via build config
00:03:59.653 rawdev: explicitly disabled via build config
00:03:59.653 regexdev: explicitly disabled via build config
00:03:59.653 mldev: explicitly disabled via build config
00:03:59.653 rib: explicitly disabled via build config
00:03:59.653 sched: explicitly disabled via build config
00:03:59.653 stack: explicitly disabled via build config
00:03:59.653 ipsec: explicitly disabled via build config
00:03:59.653 pdcp: explicitly disabled via build config
00:03:59.653 fib: explicitly disabled via build config
00:03:59.653 port: explicitly disabled via build config
00:03:59.653 pdump: explicitly disabled via build config
00:03:59.653 table: explicitly disabled via build config
00:03:59.653 pipeline: explicitly disabled via build config
00:03:59.653 graph: explicitly disabled via build config
00:03:59.653 node: explicitly disabled via build config
00:03:59.653
00:03:59.653 drivers:
00:03:59.653 common/cpt: not in enabled drivers build config
00:03:59.653 common/dpaax: not in enabled drivers build config
00:03:59.653 common/iavf: not in enabled drivers build config
00:03:59.653 common/idpf: not in enabled drivers build config
00:03:59.653 common/ionic: not in enabled drivers build config
00:03:59.653 common/mvep: not in enabled drivers build config
00:03:59.653 common/octeontx: not in enabled drivers build config
00:03:59.653 bus/auxiliary: not in enabled drivers build config
00:03:59.653 bus/cdx: not in enabled drivers build config
00:03:59.653 bus/dpaa: not in enabled drivers build config
00:03:59.653 bus/fslmc: not in enabled drivers build config
00:03:59.653 bus/ifpga: not in enabled drivers build config
00:03:59.653 bus/platform: not in enabled drivers build config
00:03:59.653 bus/uacce: not in enabled drivers build config
00:03:59.653 bus/vmbus: not in enabled drivers build config
00:03:59.653 common/cnxk: not in enabled drivers build config
00:03:59.653 common/mlx5: not in enabled drivers build config
00:03:59.653 common/nfp: not in enabled drivers build config
00:03:59.653 common/nitrox: not in enabled drivers build config
00:03:59.653 common/qat: not in enabled drivers build config
00:03:59.653 common/sfc_efx: not in enabled drivers build config
00:03:59.653 mempool/bucket: not in enabled drivers build config
00:03:59.653 mempool/cnxk: not in enabled drivers build config
00:03:59.653 mempool/dpaa: not in enabled drivers build config
00:03:59.653 mempool/dpaa2: not in enabled drivers build config
00:03:59.653 mempool/octeontx: not in enabled drivers build config
00:03:59.653 mempool/stack: not in enabled drivers build config
00:03:59.653 dma/cnxk: not in enabled drivers build config
00:03:59.653 dma/dpaa: not in enabled drivers build config
00:03:59.653 dma/dpaa2: not in enabled drivers build config
00:03:59.653 dma/hisilicon: not in enabled drivers build config
00:03:59.653 dma/idxd: not in enabled drivers build config
00:03:59.653 dma/ioat: not in enabled drivers build config
00:03:59.653 dma/skeleton: not in enabled drivers build config
00:03:59.653 net/af_packet: not in enabled drivers build config
00:03:59.653 net/af_xdp: not in enabled drivers build config
00:03:59.653 net/ark: not in enabled drivers build config
00:03:59.653 net/atlantic: not in enabled drivers build config
00:03:59.653 net/avp: not in enabled drivers build config
00:03:59.653 net/axgbe: not in enabled drivers build config
00:03:59.653 net/bnx2x: not in enabled drivers build config
00:03:59.653 net/bnxt: not in enabled drivers build config
00:03:59.653 net/bonding: not in enabled drivers build config
00:03:59.653 net/cnxk: not in enabled drivers build config
00:03:59.653 net/cpfl: not in enabled drivers build config
00:03:59.653 net/cxgbe: not in enabled drivers build config
00:03:59.653 net/dpaa: not in enabled drivers build config
00:03:59.653 net/dpaa2: not in enabled drivers build config
00:03:59.653 net/e1000: not in enabled drivers build config
00:03:59.653 net/ena: not in enabled drivers build config
00:03:59.653 net/enetc: not in enabled drivers build config
00:03:59.653 net/enetfec: not in enabled drivers build config
00:03:59.653 net/enic: not in enabled drivers build config
00:03:59.653 net/failsafe: not in enabled drivers build config
00:03:59.653 net/fm10k: not in enabled drivers build config
00:03:59.653 net/gve: not in enabled drivers build config
00:03:59.653 net/hinic: not in enabled drivers build config
00:03:59.653 net/hns3: not in enabled drivers build config
00:03:59.653 net/i40e: not in enabled drivers build config
00:03:59.653 net/iavf: not in enabled drivers build config
00:03:59.653 net/ice: not in enabled drivers build config
00:03:59.653 net/idpf: not in enabled drivers build config
00:03:59.653 net/igc: not in enabled drivers build config
00:03:59.653 net/ionic: not in enabled drivers build config
00:03:59.653 net/ipn3ke: not in enabled drivers build config
00:03:59.653 net/ixgbe: not in enabled drivers build config
00:03:59.653 net/mana: not in enabled drivers build config
00:03:59.653 net/memif: not in enabled drivers build config
00:03:59.653 net/mlx4: not in enabled drivers build config
00:03:59.653 net/mlx5: not in enabled drivers build config
00:03:59.653 net/mvneta: not in enabled drivers build config
00:03:59.653 net/mvpp2: not in enabled drivers build config
00:03:59.653 net/netvsc: not in enabled drivers build config
00:03:59.653 net/nfb: not in enabled drivers build config
00:03:59.653 net/nfp: not in enabled drivers build config
00:03:59.653 net/ngbe: not in enabled drivers build config
00:03:59.653 net/null: not in enabled drivers build config
00:03:59.653 net/octeontx: not in enabled drivers build config
00:03:59.653 net/octeon_ep: not in enabled drivers build config
00:03:59.653 net/pcap: not in enabled drivers build config
00:03:59.653 net/pfe: not in enabled drivers build config
00:03:59.653 net/qede: not in enabled drivers build config
00:03:59.653 net/ring: not in enabled drivers build config
00:03:59.653 net/sfc: not in enabled drivers build config
00:03:59.653 net/softnic: not in enabled drivers build config
00:03:59.653 net/tap: not in enabled drivers build config
00:03:59.653 net/thunderx: not in enabled drivers build config
00:03:59.653 net/txgbe: not in enabled drivers build config
00:03:59.653 net/vdev_netvsc: not in enabled drivers build config
00:03:59.653 net/vhost: not in enabled drivers build config
00:03:59.653 net/virtio: not in enabled drivers build config
00:03:59.653 net/vmxnet3: not in enabled drivers build config
00:03:59.653 raw/*: missing internal dependency, "rawdev"
00:03:59.653 crypto/armv8: not in enabled drivers build config
00:03:59.653 crypto/bcmfs: not in enabled drivers build config
00:03:59.653 crypto/caam_jr: not in enabled drivers build config
00:03:59.653 crypto/ccp: not in enabled drivers build config
00:03:59.653 crypto/cnxk: not in enabled drivers build config
00:03:59.653 crypto/dpaa_sec: not in enabled drivers build config
00:03:59.653 crypto/dpaa2_sec: not in enabled drivers build config
00:03:59.653 crypto/ipsec_mb: not in enabled drivers build config
00:03:59.653 crypto/mlx5: not in enabled drivers build config
00:03:59.653 crypto/mvsam: not in enabled drivers build config
00:03:59.653 crypto/nitrox: not in enabled drivers build config
00:03:59.653 crypto/null: not in enabled drivers build config
00:03:59.653 crypto/octeontx: not in enabled drivers build config
00:03:59.653 crypto/openssl: not in enabled drivers build config
00:03:59.653 crypto/scheduler: not in enabled drivers build config
00:03:59.653 crypto/uadk: not in enabled drivers build config
00:03:59.653 crypto/virtio: not in enabled drivers build config
00:03:59.653 compress/isal: not in enabled drivers build config
00:03:59.653 compress/mlx5: not in enabled drivers build config
00:03:59.653 compress/nitrox: not in enabled drivers build config
00:03:59.653 compress/octeontx: not in enabled drivers build config
00:03:59.653 compress/zlib: not in enabled drivers build config
00:03:59.653 regex/*: missing internal dependency, "regexdev"
00:03:59.653 ml/*: missing internal dependency, "mldev"
00:03:59.653 vdpa/ifc: not in enabled drivers build config
00:03:59.653 vdpa/mlx5: not in enabled drivers build config
00:03:59.653 vdpa/nfp: not in enabled drivers build config
00:03:59.653 vdpa/sfc: not in enabled drivers build config
00:03:59.653 event/*: missing internal dependency, "eventdev"
00:03:59.653 baseband/*: missing internal dependency, "bbdev"
00:03:59.653 gpu/*: missing internal dependency, "gpudev"
00:03:59.653
00:03:59.653
00:03:59.653 Build targets in project: 85
00:03:59.653
00:03:59.653 DPDK 24.03.0
00:03:59.653
00:03:59.653 User defined options
00:03:59.653 buildtype : debug
00:03:59.653 default_library : shared
00:03:59.653 libdir : lib
00:03:59.653 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:59.653 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:59.653 c_link_args :
00:03:59.653 cpu_instruction_set: native
00:03:59.653 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:03:59.653 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:03:59.653 enable_docs : false
00:03:59.653 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:59.653 enable_kmods : false
00:03:59.653 tests : false
00:03:59.653
00:03:59.653 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:59.653 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:59.653 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:59.917 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:59.917 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:59.917 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:59.917 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:59.917 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:59.917 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:59.917 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:59.917 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:59.917 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:59.917 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:59.917 [12/268] Linking static target lib/librte_kvargs.a
00:03:59.917 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:59.917 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:59.917 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:59.917 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:59.917 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:59.917 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:59.917 [19/268] Linking static target lib/librte_log.a
00:03:59.917 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:59.917 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:59.917 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:00.176 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:00.176 [24/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:00.176 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:00.176 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:00.176 [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:04:00.176 [28/268] Linking static target lib/librte_pci.a
00:04:00.176 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:00.176 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:00.176 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:00.176 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:00.176 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:00.176 [34/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:00.434 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:00.434 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:00.434 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:00.434 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:00.434 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:00.434 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:00.434 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:00.434 [42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:00.434 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:00.434 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:00.434 [45/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:00.434 [46/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:00.434 [47/268] Linking static target lib/librte_meter.a
00:04:00.434 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:00.434 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:00.434 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:00.434 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:00.434 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:00.434 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:00.434 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:04:00.434 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:00.434 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:00.434 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:04:00.434 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:00.434 [59/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:00.434 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:00.434 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:00.434 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:00.434 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:00.434 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:00.434 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:00.434 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:00.434 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:00.434 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:00.434 [69/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:00.434 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:00.434 [71/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:00.434 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:00.434 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:00.434 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:00.434 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:00.434 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:00.434 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:00.434 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:00.434 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:00.434 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:00.434 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:00.434 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:00.434 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:00.434 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:00.434 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:00.434 [86/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:00.434 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:00.434 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:00.434 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:00.434 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:00.434 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:00.434 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:00.434 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:00.434 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:00.434 [95/268] Linking static target lib/librte_telemetry.a
00:04:00.692 [96/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:00.692 [97/268] Linking static target lib/librte_ring.a
00:04:00.692 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:00.692 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:00.692 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:00.692 [101/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:00.692 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:00.692 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:04:00.692 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:04:00.692 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:00.692 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:04:00.692 [107/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:04:00.692 [108/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:04:00.692 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:00.692 [110/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:04:00.692 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:00.692 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:04:00.692 [113/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:00.692 [114/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:00.692 [115/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:04:00.692 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:00.692 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:00.692 [118/268] Linking static target lib/librte_cmdline.a
00:04:00.692 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:04:00.692 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:04:00.692 [121/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:04:00.692 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:04:00.692 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:04:00.692 [124/268] Linking static target lib/librte_rcu.a
00:04:00.692 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:04:00.692 [126/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:00.692 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:00.692 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:00.692 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:04:00.692 [130/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:04:00.692 [131/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:04:00.692 [132/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:04:00.692 [133/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:04:00.692 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:04:00.692 [135/268] Linking static target lib/librte_timer.a
00:04:00.692 [136/268] Linking static target lib/librte_mempool.a
00:04:00.692 [137/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:00.692 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:04:00.692 [139/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:00.692 [140/268] Linking static target lib/librte_net.a
00:04:00.692 [141/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:00.692 [142/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:04:00.692 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:04:00.692 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:04:00.692 [145/268] Linking static target lib/librte_eal.a
00:04:00.692 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:04:00.693 [147/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:04:00.693 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:04:00.693 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:04:00.693 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:04:00.950 [151/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:00.950 [152/268] Linking static target lib/librte_dmadev.a
00:04:00.950 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:04:00.950 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:00.950 [155/268] Linking static target lib/librte_compressdev.a
00:04:00.950 [156/268] Linking target lib/librte_log.so.24.1
00:04:00.950 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:00.950 [158/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:00.950 [159/268] Linking static target lib/librte_reorder.a
00:04:00.950 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:00.950 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:00.950 [162/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:04:00.950 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:04:00.950 [164/268] Linking static target lib/librte_mbuf.a
00:04:00.950 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:04:00.950 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:00.950 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:00.950 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:00.950 [169/268] Linking static target lib/librte_hash.a
00:04:00.950 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:00.950 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:00.950 [172/268] Linking static target lib/librte_power.a
00:04:00.950 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:00.950 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:00.950 [175/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:00.950 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:00.950 [177/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:00.950 [178/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:04:00.950 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:00.951 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:04:00.951 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:00.951 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:00.951 [183/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:04:00.951 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:00.951 [185/268] Linking target lib/librte_kvargs.so.24.1
00:04:00.951 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:00.951 [187/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.209 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:01.209 [189/268] Linking static target lib/librte_security.a
00:04:01.209 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:01.209 [191/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.209 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:01.209 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:04:01.209 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.209 [195/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:01.209 [196/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:04:01.209 [197/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:04:01.209 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:01.209 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:04:01.209 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:01.209 [201/268] Linking static target drivers/librte_bus_vdev.a
00:04:01.209 [202/268] Linking static target lib/librte_cryptodev.a
00:04:01.209 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:01.209 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:04:01.209 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:01.209 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:01.209 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:04:01.209 [208/268] Linking target lib/librte_telemetry.so.24.1
00:04:01.209 [209/268] Linking static target drivers/librte_bus_pci.a
00:04:01.209 [210/268] Linking static target drivers/librte_mempool_ring.a
00:04:01.209 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.209 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:01.468 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.468 [214/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:04:01.726 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.726 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.727 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.727 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:04:01.727 [219/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.727 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.727 [221/268] Linking static target lib/librte_ethdev.a
00:04:01.984 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.984 [223/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.984 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:04:01.984 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.984 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:04:01.984 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:03.356 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:04:03.356 [229/268] Linking static target lib/librte_vhost.a
00:04:03.612 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:04:05.505 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:04:12.057 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture
output) 00:04:14.009 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.267 [234/268] Linking target lib/librte_eal.so.24.1 00:04:14.267 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:14.525 [236/268] Linking target lib/librte_ring.so.24.1 00:04:14.525 [237/268] Linking target lib/librte_meter.so.24.1 00:04:14.525 [238/268] Linking target lib/librte_pci.so.24.1 00:04:14.525 [239/268] Linking target lib/librte_timer.so.24.1 00:04:14.525 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:14.525 [241/268] Linking target lib/librte_dmadev.so.24.1 00:04:14.525 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:14.525 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:14.525 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:14.525 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:14.525 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:14.525 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:14.782 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:14.782 [249/268] Linking target lib/librte_rcu.so.24.1 00:04:14.782 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:14.782 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:14.782 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:14.782 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:15.040 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:15.041 [255/268] Linking target lib/librte_reorder.so.24.1 00:04:15.041 [256/268] Linking target lib/librte_net.so.24.1 00:04:15.041 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:15.041 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:15.298 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:15.298 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:15.298 [261/268] Linking target lib/librte_hash.so.24.1 00:04:15.298 [262/268] Linking target lib/librte_security.so.24.1 00:04:15.298 [263/268] Linking target lib/librte_cmdline.so.24.1 00:04:15.298 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:15.557 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:15.557 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:15.557 [267/268] Linking target lib/librte_power.so.24.1 00:04:15.557 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:15.557 INFO: autodetecting backend as ninja 00:04:15.557 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:04:16.930 CC lib/ut/ut.o 00:04:16.930 CC lib/ut_mock/mock.o 00:04:16.930 CC lib/log/log.o 00:04:16.930 CC lib/log/log_flags.o 00:04:16.930 CC lib/log/log_deprecated.o 00:04:16.930 LIB libspdk_ut.a 00:04:16.930 LIB libspdk_ut_mock.a 00:04:16.930 LIB libspdk_log.a 00:04:16.930 SO libspdk_ut.so.2.0 00:04:16.930 SO libspdk_ut_mock.so.6.0 00:04:16.930 SO libspdk_log.so.7.0 00:04:17.215 SYMLINK libspdk_ut.so 00:04:17.215 SYMLINK libspdk_ut_mock.so 00:04:17.215 SYMLINK libspdk_log.so 
00:04:17.473 CC lib/dma/dma.o 00:04:17.473 CC lib/util/base64.o 00:04:17.473 CC lib/util/bit_array.o 00:04:17.473 CC lib/util/cpuset.o 00:04:17.473 CC lib/util/crc16.o 00:04:17.473 CC lib/util/crc32.o 00:04:17.473 CC lib/util/crc32c.o 00:04:17.473 CC lib/util/crc32_ieee.o 00:04:17.473 CC lib/util/crc64.o 00:04:17.473 CXX lib/trace_parser/trace.o 00:04:17.473 CC lib/ioat/ioat.o 00:04:17.473 CC lib/util/dif.o 00:04:17.473 CC lib/util/fd.o 00:04:17.473 CC lib/util/file.o 00:04:17.473 CC lib/util/hexlify.o 00:04:17.473 CC lib/util/iov.o 00:04:17.473 CC lib/util/math.o 00:04:17.473 CC lib/util/pipe.o 00:04:17.473 CC lib/util/strerror_tls.o 00:04:17.473 CC lib/util/string.o 00:04:17.473 CC lib/util/uuid.o 00:04:17.473 CC lib/util/fd_group.o 00:04:17.473 CC lib/util/xor.o 00:04:17.473 CC lib/util/zipf.o 00:04:17.732 CC lib/vfio_user/host/vfio_user_pci.o 00:04:17.732 CC lib/vfio_user/host/vfio_user.o 00:04:17.732 LIB libspdk_dma.a 00:04:17.732 SO libspdk_dma.so.4.0 00:04:17.732 SYMLINK libspdk_dma.so 00:04:17.732 LIB libspdk_ioat.a 00:04:17.990 SO libspdk_ioat.so.7.0 00:04:17.990 SYMLINK libspdk_ioat.so 00:04:17.990 LIB libspdk_vfio_user.a 00:04:17.990 SO libspdk_vfio_user.so.5.0 00:04:17.990 LIB libspdk_util.a 00:04:17.990 SYMLINK libspdk_vfio_user.so 00:04:17.990 SO libspdk_util.so.9.0 00:04:18.248 SYMLINK libspdk_util.so 00:04:18.248 LIB libspdk_trace_parser.a 00:04:18.505 SO libspdk_trace_parser.so.5.0 00:04:18.506 SYMLINK libspdk_trace_parser.so 00:04:18.506 CC lib/conf/conf.o 00:04:18.763 CC lib/rdma/common.o 00:04:18.763 CC lib/rdma/rdma_verbs.o 00:04:18.763 CC lib/idxd/idxd.o 00:04:18.763 CC lib/idxd/idxd_user.o 00:04:18.763 CC lib/idxd/idxd_kernel.o 00:04:18.764 CC lib/env_dpdk/env.o 00:04:18.764 CC lib/env_dpdk/memory.o 00:04:18.764 CC lib/env_dpdk/pci.o 00:04:18.764 CC lib/vmd/vmd.o 00:04:18.764 CC lib/env_dpdk/init.o 00:04:18.764 CC lib/vmd/led.o 00:04:18.764 CC lib/env_dpdk/threads.o 00:04:18.764 CC lib/env_dpdk/pci_ioat.o 00:04:18.764 CC lib/env_dpdk/pci_virtio.o 00:04:18.764 CC lib/json/json_parse.o 00:04:18.764 CC lib/env_dpdk/pci_vmd.o 00:04:18.764 CC lib/json/json_util.o 00:04:18.764 CC lib/env_dpdk/pci_idxd.o 00:04:18.764 CC lib/json/json_write.o 00:04:18.764 CC lib/env_dpdk/pci_event.o 00:04:18.764 CC lib/env_dpdk/sigbus_handler.o 00:04:18.764 CC lib/env_dpdk/pci_dpdk.o 00:04:18.764 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:18.764 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.022 LIB libspdk_conf.a 00:04:19.022 SO libspdk_conf.so.6.0 00:04:19.022 LIB libspdk_rdma.a 00:04:19.022 LIB libspdk_json.a 00:04:19.022 SO libspdk_rdma.so.6.0 00:04:19.022 SYMLINK libspdk_conf.so 00:04:19.022 SO libspdk_json.so.6.0 00:04:19.022 SYMLINK libspdk_rdma.so 00:04:19.022 SYMLINK libspdk_json.so 00:04:19.280 LIB libspdk_idxd.a 00:04:19.280 SO libspdk_idxd.so.12.0 00:04:19.280 LIB libspdk_vmd.a 00:04:19.280 SYMLINK libspdk_idxd.so 00:04:19.280 SO libspdk_vmd.so.6.0 00:04:19.538 SYMLINK libspdk_vmd.so 00:04:19.538 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.538 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.538 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.538 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.796 LIB libspdk_jsonrpc.a 00:04:19.796 SO libspdk_jsonrpc.so.6.0 00:04:19.796 SYMLINK libspdk_jsonrpc.so 00:04:20.055 LIB libspdk_env_dpdk.a 00:04:20.055 SO libspdk_env_dpdk.so.14.1 00:04:20.312 SYMLINK libspdk_env_dpdk.so 00:04:20.312 CC lib/rpc/rpc.o 00:04:20.570 LIB libspdk_rpc.a 00:04:20.570 SO libspdk_rpc.so.6.0 00:04:20.570 SYMLINK libspdk_rpc.so 00:04:21.137 CC lib/keyring/keyring_rpc.o 00:04:21.137 CC 
lib/keyring/keyring.o 00:04:21.137 CC lib/notify/notify.o 00:04:21.137 CC lib/notify/notify_rpc.o 00:04:21.137 CC lib/trace/trace.o 00:04:21.137 CC lib/trace/trace_flags.o 00:04:21.137 CC lib/trace/trace_rpc.o 00:04:21.137 LIB libspdk_notify.a 00:04:21.137 SO libspdk_notify.so.6.0 00:04:21.137 LIB libspdk_keyring.a 00:04:21.137 LIB libspdk_trace.a 00:04:21.137 SO libspdk_keyring.so.1.0 00:04:21.137 SYMLINK libspdk_notify.so 00:04:21.396 SO libspdk_trace.so.10.0 00:04:21.396 SYMLINK libspdk_keyring.so 00:04:21.396 SYMLINK libspdk_trace.so 00:04:21.655 CC lib/sock/sock.o 00:04:21.655 CC lib/sock/sock_rpc.o 00:04:21.655 CC lib/thread/thread.o 00:04:21.655 CC lib/thread/iobuf.o 00:04:22.221 LIB libspdk_sock.a 00:04:22.221 SO libspdk_sock.so.9.0 00:04:22.221 SYMLINK libspdk_sock.so 00:04:22.480 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.480 CC lib/nvme/nvme_ctrlr.o 00:04:22.480 CC lib/nvme/nvme_fabric.o 00:04:22.480 CC lib/nvme/nvme_ns_cmd.o 00:04:22.480 CC lib/nvme/nvme_ns.o 00:04:22.480 CC lib/nvme/nvme_pcie_common.o 00:04:22.480 CC lib/nvme/nvme_pcie.o 00:04:22.480 CC lib/nvme/nvme_qpair.o 00:04:22.480 CC lib/nvme/nvme.o 00:04:22.480 CC lib/nvme/nvme_quirks.o 00:04:22.480 CC lib/nvme/nvme_transport.o 00:04:22.480 CC lib/nvme/nvme_discovery.o 00:04:22.480 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.480 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.480 CC lib/nvme/nvme_tcp.o 00:04:22.480 CC lib/nvme/nvme_opal.o 00:04:22.480 CC lib/nvme/nvme_io_msg.o 00:04:22.480 CC lib/nvme/nvme_poll_group.o 00:04:22.480 CC lib/nvme/nvme_zns.o 00:04:22.480 CC lib/nvme/nvme_stubs.o 00:04:22.739 CC lib/nvme/nvme_auth.o 00:04:22.739 CC lib/nvme/nvme_cuse.o 00:04:22.739 CC lib/nvme/nvme_rdma.o 00:04:23.306 LIB libspdk_thread.a 00:04:23.306 SO libspdk_thread.so.10.0 00:04:23.306 SYMLINK libspdk_thread.so 00:04:23.564 CC lib/init/subsystem.o 00:04:23.564 CC lib/init/json_config.o 00:04:23.564 CC lib/init/rpc.o 00:04:23.564 CC lib/init/subsystem_rpc.o 00:04:23.564 CC lib/accel/accel.o 00:04:23.564 CC lib/accel/accel_rpc.o 00:04:23.564 CC lib/accel/accel_sw.o 00:04:23.564 CC lib/virtio/virtio.o 00:04:23.564 CC lib/virtio/virtio_vhost_user.o 00:04:23.564 CC lib/virtio/virtio_vfio_user.o 00:04:23.564 CC lib/virtio/virtio_pci.o 00:04:23.564 CC lib/blob/request.o 00:04:23.564 CC lib/blob/blobstore.o 00:04:23.564 CC lib/blob/zeroes.o 00:04:23.564 CC lib/blob/blob_bs_dev.o 00:04:23.823 LIB libspdk_init.a 00:04:23.823 SO libspdk_init.so.5.0 00:04:24.082 LIB libspdk_virtio.a 00:04:24.082 SO libspdk_virtio.so.7.0 00:04:24.082 SYMLINK libspdk_init.so 00:04:24.082 SYMLINK libspdk_virtio.so 00:04:24.342 CC lib/event/app.o 00:04:24.342 CC lib/event/reactor.o 00:04:24.342 CC lib/event/log_rpc.o 00:04:24.342 CC lib/event/app_rpc.o 00:04:24.342 CC lib/event/scheduler_static.o 00:04:24.601 LIB libspdk_accel.a 00:04:24.601 SO libspdk_accel.so.15.0 00:04:24.601 LIB libspdk_nvme.a 00:04:24.601 SYMLINK libspdk_accel.so 00:04:24.859 SO libspdk_nvme.so.13.0 00:04:24.859 LIB libspdk_event.a 00:04:24.859 SO libspdk_event.so.13.1 00:04:24.859 SYMLINK libspdk_event.so 00:04:25.118 CC lib/bdev/bdev.o 00:04:25.118 CC lib/bdev/bdev_rpc.o 00:04:25.118 CC lib/bdev/bdev_zone.o 00:04:25.118 CC lib/bdev/part.o 00:04:25.118 CC lib/bdev/scsi_nvme.o 00:04:25.118 SYMLINK libspdk_nvme.so 00:04:26.497 LIB libspdk_blob.a 00:04:26.497 SO libspdk_blob.so.11.0 00:04:26.497 SYMLINK libspdk_blob.so 00:04:26.756 CC lib/blobfs/blobfs.o 00:04:26.756 CC lib/blobfs/tree.o 00:04:27.015 CC lib/lvol/lvol.o 00:04:27.584 LIB libspdk_bdev.a 00:04:27.584 SO libspdk_bdev.so.15.0 
00:04:27.584 LIB libspdk_blobfs.a 00:04:27.584 SYMLINK libspdk_bdev.so 00:04:27.584 SO libspdk_blobfs.so.10.0 00:04:27.843 LIB libspdk_lvol.a 00:04:27.843 SYMLINK libspdk_blobfs.so 00:04:27.843 SO libspdk_lvol.so.10.0 00:04:27.843 SYMLINK libspdk_lvol.so 00:04:28.104 CC lib/scsi/dev.o 00:04:28.104 CC lib/scsi/lun.o 00:04:28.104 CC lib/scsi/port.o 00:04:28.104 CC lib/scsi/scsi.o 00:04:28.104 CC lib/ftl/ftl_debug.o 00:04:28.104 CC lib/scsi/scsi_bdev.o 00:04:28.104 CC lib/ftl/ftl_core.o 00:04:28.104 CC lib/ublk/ublk.o 00:04:28.104 CC lib/ftl/ftl_init.o 00:04:28.104 CC lib/ftl/ftl_layout.o 00:04:28.104 CC lib/scsi/scsi_pr.o 00:04:28.104 CC lib/ublk/ublk_rpc.o 00:04:28.104 CC lib/nvmf/ctrlr.o 00:04:28.104 CC lib/scsi/scsi_rpc.o 00:04:28.104 CC lib/ftl/ftl_io.o 00:04:28.104 CC lib/nvmf/ctrlr_discovery.o 00:04:28.104 CC lib/ftl/ftl_sb.o 00:04:28.104 CC lib/scsi/task.o 00:04:28.104 CC lib/nvmf/ctrlr_bdev.o 00:04:28.104 CC lib/nbd/nbd.o 00:04:28.104 CC lib/ftl/ftl_l2p.o 00:04:28.104 CC lib/nvmf/subsystem.o 00:04:28.104 CC lib/nbd/nbd_rpc.o 00:04:28.104 CC lib/ftl/ftl_l2p_flat.o 00:04:28.104 CC lib/nvmf/nvmf.o 00:04:28.104 CC lib/ftl/ftl_nv_cache.o 00:04:28.104 CC lib/nvmf/nvmf_rpc.o 00:04:28.104 CC lib/ftl/ftl_band.o 00:04:28.104 CC lib/ftl/ftl_band_ops.o 00:04:28.104 CC lib/nvmf/transport.o 00:04:28.104 CC lib/ftl/ftl_writer.o 00:04:28.104 CC lib/nvmf/tcp.o 00:04:28.104 CC lib/ftl/ftl_rq.o 00:04:28.104 CC lib/nvmf/stubs.o 00:04:28.104 CC lib/nvmf/mdns_server.o 00:04:28.104 CC lib/ftl/ftl_reloc.o 00:04:28.104 CC lib/ftl/ftl_l2p_cache.o 00:04:28.104 CC lib/nvmf/rdma.o 00:04:28.104 CC lib/ftl/ftl_p2l.o 00:04:28.104 CC lib/nvmf/auth.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:28.104 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:28.104 CC lib/ftl/utils/ftl_conf.o 00:04:28.104 CC lib/ftl/utils/ftl_md.o 00:04:28.104 CC lib/ftl/utils/ftl_mempool.o 00:04:28.104 CC lib/ftl/utils/ftl_bitmap.o 00:04:28.104 CC lib/ftl/utils/ftl_property.o 00:04:28.104 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:28.104 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:28.104 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:28.104 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:28.104 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:28.104 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:28.104 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:28.104 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:28.104 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:28.104 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:28.104 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:28.104 CC lib/ftl/base/ftl_base_dev.o 00:04:28.104 CC lib/ftl/base/ftl_base_bdev.o 00:04:28.104 CC lib/ftl/ftl_trace.o 00:04:28.671 LIB libspdk_nbd.a 00:04:28.671 SO libspdk_nbd.so.7.0 00:04:28.671 LIB libspdk_ublk.a 00:04:28.671 LIB libspdk_scsi.a 00:04:28.671 SO libspdk_ublk.so.3.0 00:04:28.671 SYMLINK libspdk_nbd.so 00:04:28.671 SO libspdk_scsi.so.9.0 00:04:28.930 SYMLINK libspdk_ublk.so 00:04:28.930 SYMLINK libspdk_scsi.so 00:04:28.930 LIB libspdk_ftl.a 00:04:29.189 SO libspdk_ftl.so.9.0 00:04:29.189 CC lib/iscsi/conn.o 00:04:29.189 CC 
lib/iscsi/init_grp.o 00:04:29.189 CC lib/vhost/vhost.o 00:04:29.189 CC lib/iscsi/iscsi.o 00:04:29.189 CC lib/iscsi/md5.o 00:04:29.189 CC lib/vhost/vhost_rpc.o 00:04:29.189 CC lib/vhost/vhost_scsi.o 00:04:29.189 CC lib/iscsi/param.o 00:04:29.189 CC lib/iscsi/portal_grp.o 00:04:29.189 CC lib/vhost/vhost_blk.o 00:04:29.189 CC lib/iscsi/tgt_node.o 00:04:29.189 CC lib/vhost/rte_vhost_user.o 00:04:29.189 CC lib/iscsi/iscsi_subsystem.o 00:04:29.189 CC lib/iscsi/iscsi_rpc.o 00:04:29.189 CC lib/iscsi/task.o 00:04:29.490 SYMLINK libspdk_ftl.so 00:04:30.426 LIB libspdk_nvmf.a 00:04:30.426 LIB libspdk_vhost.a 00:04:30.426 SO libspdk_vhost.so.8.0 00:04:30.426 SO libspdk_nvmf.so.18.1 00:04:30.426 SYMLINK libspdk_vhost.so 00:04:30.426 LIB libspdk_iscsi.a 00:04:30.426 SYMLINK libspdk_nvmf.so 00:04:30.685 SO libspdk_iscsi.so.8.0 00:04:30.685 SYMLINK libspdk_iscsi.so 00:04:31.254 CC module/env_dpdk/env_dpdk_rpc.o 00:04:31.514 CC module/accel/ioat/accel_ioat.o 00:04:31.514 CC module/accel/ioat/accel_ioat_rpc.o 00:04:31.514 CC module/keyring/linux/keyring_rpc.o 00:04:31.514 CC module/accel/dsa/accel_dsa.o 00:04:31.514 CC module/keyring/linux/keyring.o 00:04:31.514 CC module/accel/error/accel_error.o 00:04:31.514 CC module/accel/error/accel_error_rpc.o 00:04:31.514 CC module/accel/dsa/accel_dsa_rpc.o 00:04:31.514 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:31.514 CC module/accel/iaa/accel_iaa.o 00:04:31.514 CC module/accel/iaa/accel_iaa_rpc.o 00:04:31.514 CC module/blob/bdev/blob_bdev.o 00:04:31.514 LIB libspdk_env_dpdk_rpc.a 00:04:31.514 CC module/sock/posix/posix.o 00:04:31.514 CC module/scheduler/gscheduler/gscheduler.o 00:04:31.514 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:31.514 CC module/keyring/file/keyring.o 00:04:31.514 CC module/keyring/file/keyring_rpc.o 00:04:31.514 SO libspdk_env_dpdk_rpc.so.6.0 00:04:31.514 SYMLINK libspdk_env_dpdk_rpc.so 00:04:31.773 LIB libspdk_keyring_linux.a 00:04:31.773 LIB libspdk_keyring_file.a 00:04:31.773 LIB libspdk_scheduler_dpdk_governor.a 00:04:31.773 LIB libspdk_scheduler_gscheduler.a 00:04:31.773 LIB libspdk_accel_ioat.a 00:04:31.773 LIB libspdk_accel_error.a 00:04:31.773 SO libspdk_keyring_file.so.1.0 00:04:31.773 SO libspdk_scheduler_gscheduler.so.4.0 00:04:31.773 SO libspdk_keyring_linux.so.1.0 00:04:31.773 LIB libspdk_scheduler_dynamic.a 00:04:31.773 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:31.773 LIB libspdk_accel_iaa.a 00:04:31.773 SO libspdk_accel_ioat.so.6.0 00:04:31.773 SO libspdk_accel_error.so.2.0 00:04:31.773 SO libspdk_scheduler_dynamic.so.4.0 00:04:31.773 LIB libspdk_accel_dsa.a 00:04:31.773 LIB libspdk_blob_bdev.a 00:04:31.773 SO libspdk_accel_iaa.so.3.0 00:04:31.773 SYMLINK libspdk_keyring_file.so 00:04:31.773 SYMLINK libspdk_scheduler_gscheduler.so 00:04:31.773 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:31.773 SYMLINK libspdk_keyring_linux.so 00:04:31.773 SO libspdk_accel_dsa.so.5.0 00:04:31.773 SYMLINK libspdk_scheduler_dynamic.so 00:04:31.773 SO libspdk_blob_bdev.so.11.0 00:04:31.773 SYMLINK libspdk_accel_ioat.so 00:04:31.773 SYMLINK libspdk_accel_error.so 00:04:31.773 SYMLINK libspdk_accel_iaa.so 00:04:32.033 SYMLINK libspdk_blob_bdev.so 00:04:32.033 SYMLINK libspdk_accel_dsa.so 00:04:32.293 LIB libspdk_sock_posix.a 00:04:32.293 SO libspdk_sock_posix.so.6.0 00:04:32.293 SYMLINK libspdk_sock_posix.so 00:04:32.551 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:32.551 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:32.551 CC module/bdev/ftl/bdev_ftl.o 00:04:32.551 CC module/blobfs/bdev/blobfs_bdev.o 
00:04:32.551 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:32.551 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:32.551 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:32.551 CC module/bdev/nvme/bdev_nvme.o 00:04:32.551 CC module/bdev/nvme/nvme_rpc.o 00:04:32.551 CC module/bdev/error/vbdev_error.o 00:04:32.551 CC module/bdev/nvme/bdev_mdns_client.o 00:04:32.551 CC module/bdev/delay/vbdev_delay.o 00:04:32.551 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:32.551 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:32.551 CC module/bdev/malloc/bdev_malloc.o 00:04:32.551 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:32.551 CC module/bdev/lvol/vbdev_lvol.o 00:04:32.551 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:32.551 CC module/bdev/nvme/vbdev_opal.o 00:04:32.551 CC module/bdev/error/vbdev_error_rpc.o 00:04:32.551 CC module/bdev/gpt/gpt.o 00:04:32.551 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:32.551 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:32.551 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:32.551 CC module/bdev/gpt/vbdev_gpt.o 00:04:32.551 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:32.551 CC module/bdev/null/bdev_null.o 00:04:32.551 CC module/bdev/passthru/vbdev_passthru.o 00:04:32.551 CC module/bdev/null/bdev_null_rpc.o 00:04:32.551 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:32.551 CC module/bdev/raid/bdev_raid.o 00:04:32.551 CC module/bdev/aio/bdev_aio_rpc.o 00:04:32.551 CC module/bdev/raid/bdev_raid_rpc.o 00:04:32.551 CC module/bdev/aio/bdev_aio.o 00:04:32.551 CC module/bdev/raid/raid0.o 00:04:32.551 CC module/bdev/raid/bdev_raid_sb.o 00:04:32.551 CC module/bdev/iscsi/bdev_iscsi.o 00:04:32.551 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:32.551 CC module/bdev/raid/concat.o 00:04:32.551 CC module/bdev/raid/raid1.o 00:04:32.551 CC module/bdev/split/vbdev_split.o 00:04:32.551 CC module/bdev/split/vbdev_split_rpc.o 00:04:32.810 LIB libspdk_blobfs_bdev.a 00:04:32.810 SO libspdk_blobfs_bdev.so.6.0 00:04:32.810 LIB libspdk_bdev_split.a 00:04:32.810 LIB libspdk_bdev_ftl.a 00:04:32.810 LIB libspdk_bdev_null.a 00:04:32.810 LIB libspdk_bdev_error.a 00:04:32.810 LIB libspdk_bdev_gpt.a 00:04:32.810 SYMLINK libspdk_blobfs_bdev.so 00:04:32.810 LIB libspdk_bdev_zone_block.a 00:04:32.810 SO libspdk_bdev_ftl.so.6.0 00:04:32.810 SO libspdk_bdev_split.so.6.0 00:04:32.810 SO libspdk_bdev_null.so.6.0 00:04:32.810 LIB libspdk_bdev_passthru.a 00:04:32.810 SO libspdk_bdev_error.so.6.0 00:04:32.810 SO libspdk_bdev_gpt.so.6.0 00:04:32.810 SO libspdk_bdev_zone_block.so.6.0 00:04:32.810 LIB libspdk_bdev_aio.a 00:04:32.810 LIB libspdk_bdev_malloc.a 00:04:32.810 SO libspdk_bdev_passthru.so.6.0 00:04:32.810 SYMLINK libspdk_bdev_null.so 00:04:32.810 LIB libspdk_bdev_iscsi.a 00:04:32.810 LIB libspdk_bdev_delay.a 00:04:32.810 SYMLINK libspdk_bdev_ftl.so 00:04:32.810 SYMLINK libspdk_bdev_split.so 00:04:33.069 SYMLINK libspdk_bdev_error.so 00:04:33.069 SO libspdk_bdev_aio.so.6.0 00:04:33.069 SO libspdk_bdev_malloc.so.6.0 00:04:33.069 SYMLINK libspdk_bdev_gpt.so 00:04:33.069 SYMLINK libspdk_bdev_zone_block.so 00:04:33.069 LIB libspdk_bdev_lvol.a 00:04:33.069 SO libspdk_bdev_iscsi.so.6.0 00:04:33.069 SO libspdk_bdev_delay.so.6.0 00:04:33.069 SYMLINK libspdk_bdev_passthru.so 00:04:33.069 SYMLINK libspdk_bdev_malloc.so 00:04:33.069 SYMLINK libspdk_bdev_aio.so 00:04:33.069 SO libspdk_bdev_lvol.so.6.0 00:04:33.069 LIB libspdk_bdev_virtio.a 00:04:33.069 SYMLINK libspdk_bdev_iscsi.so 00:04:33.069 SYMLINK libspdk_bdev_delay.so 00:04:33.069 SO libspdk_bdev_virtio.so.6.0 00:04:33.069 SYMLINK libspdk_bdev_lvol.so 00:04:33.069 SYMLINK 
libspdk_bdev_virtio.so 00:04:33.328 LIB libspdk_bdev_raid.a 00:04:33.328 SO libspdk_bdev_raid.so.6.0 00:04:33.328 SYMLINK libspdk_bdev_raid.so 00:04:34.705 LIB libspdk_bdev_nvme.a 00:04:34.705 SO libspdk_bdev_nvme.so.7.0 00:04:34.705 SYMLINK libspdk_bdev_nvme.so 00:04:35.642 CC module/event/subsystems/vmd/vmd.o 00:04:35.642 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:35.642 CC module/event/subsystems/keyring/keyring.o 00:04:35.642 CC module/event/subsystems/iobuf/iobuf.o 00:04:35.642 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:35.642 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:35.642 CC module/event/subsystems/scheduler/scheduler.o 00:04:35.642 CC module/event/subsystems/sock/sock.o 00:04:35.642 LIB libspdk_event_keyring.a 00:04:35.642 LIB libspdk_event_vmd.a 00:04:35.642 LIB libspdk_event_sock.a 00:04:35.642 LIB libspdk_event_vhost_blk.a 00:04:35.642 LIB libspdk_event_scheduler.a 00:04:35.642 LIB libspdk_event_iobuf.a 00:04:35.642 SO libspdk_event_keyring.so.1.0 00:04:35.642 SO libspdk_event_sock.so.5.0 00:04:35.642 SO libspdk_event_scheduler.so.4.0 00:04:35.642 SO libspdk_event_vmd.so.6.0 00:04:35.642 SO libspdk_event_vhost_blk.so.3.0 00:04:35.642 SO libspdk_event_iobuf.so.3.0 00:04:35.642 SYMLINK libspdk_event_keyring.so 00:04:35.901 SYMLINK libspdk_event_sock.so 00:04:35.901 SYMLINK libspdk_event_vhost_blk.so 00:04:35.901 SYMLINK libspdk_event_scheduler.so 00:04:35.901 SYMLINK libspdk_event_vmd.so 00:04:35.901 SYMLINK libspdk_event_iobuf.so 00:04:36.160 CC module/event/subsystems/accel/accel.o 00:04:36.418 LIB libspdk_event_accel.a 00:04:36.418 SO libspdk_event_accel.so.6.0 00:04:36.418 SYMLINK libspdk_event_accel.so 00:04:36.985 CC module/event/subsystems/bdev/bdev.o 00:04:36.985 LIB libspdk_event_bdev.a 00:04:36.985 SO libspdk_event_bdev.so.6.0 00:04:36.985 SYMLINK libspdk_event_bdev.so 00:04:37.552 CC module/event/subsystems/nbd/nbd.o 00:04:37.552 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:37.552 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:37.552 CC module/event/subsystems/ublk/ublk.o 00:04:37.552 CC module/event/subsystems/scsi/scsi.o 00:04:37.552 LIB libspdk_event_nbd.a 00:04:37.552 LIB libspdk_event_ublk.a 00:04:37.552 LIB libspdk_event_scsi.a 00:04:37.552 SO libspdk_event_nbd.so.6.0 00:04:37.552 SO libspdk_event_ublk.so.3.0 00:04:37.810 SO libspdk_event_scsi.so.6.0 00:04:37.810 LIB libspdk_event_nvmf.a 00:04:37.810 SYMLINK libspdk_event_nbd.so 00:04:37.810 SYMLINK libspdk_event_ublk.so 00:04:37.810 SO libspdk_event_nvmf.so.6.0 00:04:37.810 SYMLINK libspdk_event_scsi.so 00:04:37.810 SYMLINK libspdk_event_nvmf.so 00:04:38.069 CC module/event/subsystems/iscsi/iscsi.o 00:04:38.069 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:38.328 LIB libspdk_event_vhost_scsi.a 00:04:38.328 LIB libspdk_event_iscsi.a 00:04:38.328 SO libspdk_event_vhost_scsi.so.3.0 00:04:38.328 SO libspdk_event_iscsi.so.6.0 00:04:38.328 SYMLINK libspdk_event_vhost_scsi.so 00:04:38.328 SYMLINK libspdk_event_iscsi.so 00:04:38.587 SO libspdk.so.6.0 00:04:38.587 SYMLINK libspdk.so 00:04:39.160 TEST_HEADER include/spdk/accel.h 00:04:39.160 TEST_HEADER include/spdk/accel_module.h 00:04:39.160 TEST_HEADER include/spdk/assert.h 00:04:39.160 TEST_HEADER include/spdk/barrier.h 00:04:39.160 TEST_HEADER include/spdk/base64.h 00:04:39.160 CC test/rpc_client/rpc_client_test.o 00:04:39.160 TEST_HEADER include/spdk/bdev.h 00:04:39.160 TEST_HEADER include/spdk/bdev_module.h 00:04:39.160 TEST_HEADER include/spdk/bdev_zone.h 00:04:39.160 TEST_HEADER include/spdk/bit_array.h 00:04:39.160 CC 
app/spdk_nvme_discover/discovery_aer.o 00:04:39.160 CC app/spdk_lspci/spdk_lspci.o 00:04:39.160 TEST_HEADER include/spdk/bit_pool.h 00:04:39.160 TEST_HEADER include/spdk/blob_bdev.h 00:04:39.160 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:39.160 TEST_HEADER include/spdk/blobfs.h 00:04:39.160 TEST_HEADER include/spdk/conf.h 00:04:39.160 TEST_HEADER include/spdk/blob.h 00:04:39.160 CC app/trace_record/trace_record.o 00:04:39.160 TEST_HEADER include/spdk/cpuset.h 00:04:39.160 TEST_HEADER include/spdk/config.h 00:04:39.160 CXX app/trace/trace.o 00:04:39.160 TEST_HEADER include/spdk/crc32.h 00:04:39.160 TEST_HEADER include/spdk/crc64.h 00:04:39.160 TEST_HEADER include/spdk/crc16.h 00:04:39.160 CC app/spdk_nvme_perf/perf.o 00:04:39.160 TEST_HEADER include/spdk/dif.h 00:04:39.160 CC app/spdk_top/spdk_top.o 00:04:39.160 TEST_HEADER include/spdk/dma.h 00:04:39.160 TEST_HEADER include/spdk/env_dpdk.h 00:04:39.160 TEST_HEADER include/spdk/endian.h 00:04:39.160 TEST_HEADER include/spdk/env.h 00:04:39.160 CC app/spdk_nvme_identify/identify.o 00:04:39.160 TEST_HEADER include/spdk/event.h 00:04:39.160 TEST_HEADER include/spdk/fd_group.h 00:04:39.160 TEST_HEADER include/spdk/fd.h 00:04:39.161 TEST_HEADER include/spdk/file.h 00:04:39.161 TEST_HEADER include/spdk/ftl.h 00:04:39.161 TEST_HEADER include/spdk/gpt_spec.h 00:04:39.161 TEST_HEADER include/spdk/hexlify.h 00:04:39.161 TEST_HEADER include/spdk/histogram_data.h 00:04:39.161 TEST_HEADER include/spdk/idxd.h 00:04:39.161 TEST_HEADER include/spdk/init.h 00:04:39.161 TEST_HEADER include/spdk/idxd_spec.h 00:04:39.161 TEST_HEADER include/spdk/ioat.h 00:04:39.161 TEST_HEADER include/spdk/ioat_spec.h 00:04:39.161 TEST_HEADER include/spdk/iscsi_spec.h 00:04:39.161 TEST_HEADER include/spdk/json.h 00:04:39.161 TEST_HEADER include/spdk/jsonrpc.h 00:04:39.161 TEST_HEADER include/spdk/keyring.h 00:04:39.161 TEST_HEADER include/spdk/keyring_module.h 00:04:39.161 TEST_HEADER include/spdk/likely.h 00:04:39.161 TEST_HEADER include/spdk/lvol.h 00:04:39.161 TEST_HEADER include/spdk/log.h 00:04:39.161 TEST_HEADER include/spdk/memory.h 00:04:39.161 TEST_HEADER include/spdk/mmio.h 00:04:39.161 TEST_HEADER include/spdk/notify.h 00:04:39.161 TEST_HEADER include/spdk/nbd.h 00:04:39.161 TEST_HEADER include/spdk/nvme.h 00:04:39.161 CC app/iscsi_tgt/iscsi_tgt.o 00:04:39.161 TEST_HEADER include/spdk/nvme_intel.h 00:04:39.161 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:39.161 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:39.161 TEST_HEADER include/spdk/nvme_spec.h 00:04:39.161 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:39.161 TEST_HEADER include/spdk/nvme_zns.h 00:04:39.161 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:39.161 TEST_HEADER include/spdk/nvmf.h 00:04:39.161 TEST_HEADER include/spdk/nvmf_transport.h 00:04:39.161 TEST_HEADER include/spdk/nvmf_spec.h 00:04:39.161 TEST_HEADER include/spdk/opal.h 00:04:39.161 TEST_HEADER include/spdk/pci_ids.h 00:04:39.161 TEST_HEADER include/spdk/opal_spec.h 00:04:39.161 TEST_HEADER include/spdk/pipe.h 00:04:39.161 TEST_HEADER include/spdk/queue.h 00:04:39.161 TEST_HEADER include/spdk/reduce.h 00:04:39.161 TEST_HEADER include/spdk/rpc.h 00:04:39.161 TEST_HEADER include/spdk/scheduler.h 00:04:39.161 CC app/spdk_dd/spdk_dd.o 00:04:39.161 TEST_HEADER include/spdk/scsi.h 00:04:39.161 TEST_HEADER include/spdk/sock.h 00:04:39.161 TEST_HEADER include/spdk/scsi_spec.h 00:04:39.161 TEST_HEADER include/spdk/stdinc.h 00:04:39.161 TEST_HEADER include/spdk/string.h 00:04:39.161 TEST_HEADER include/spdk/thread.h 00:04:39.161 TEST_HEADER 
include/spdk/trace.h 00:04:39.161 TEST_HEADER include/spdk/trace_parser.h 00:04:39.161 TEST_HEADER include/spdk/tree.h 00:04:39.161 TEST_HEADER include/spdk/ublk.h 00:04:39.161 CC app/nvmf_tgt/nvmf_main.o 00:04:39.161 TEST_HEADER include/spdk/util.h 00:04:39.161 TEST_HEADER include/spdk/uuid.h 00:04:39.161 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:39.161 TEST_HEADER include/spdk/version.h 00:04:39.161 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:39.161 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:39.161 TEST_HEADER include/spdk/vhost.h 00:04:39.161 TEST_HEADER include/spdk/vmd.h 00:04:39.161 TEST_HEADER include/spdk/xor.h 00:04:39.161 TEST_HEADER include/spdk/zipf.h 00:04:39.161 CXX test/cpp_headers/accel.o 00:04:39.161 CXX test/cpp_headers/accel_module.o 00:04:39.161 CXX test/cpp_headers/assert.o 00:04:39.161 CXX test/cpp_headers/base64.o 00:04:39.161 CXX test/cpp_headers/barrier.o 00:04:39.161 CXX test/cpp_headers/bdev.o 00:04:39.161 CXX test/cpp_headers/bdev_module.o 00:04:39.161 CC app/vhost/vhost.o 00:04:39.161 CXX test/cpp_headers/bdev_zone.o 00:04:39.161 CXX test/cpp_headers/bit_array.o 00:04:39.161 CXX test/cpp_headers/bit_pool.o 00:04:39.161 CXX test/cpp_headers/blobfs.o 00:04:39.161 CXX test/cpp_headers/blobfs_bdev.o 00:04:39.161 CXX test/cpp_headers/blob_bdev.o 00:04:39.161 CXX test/cpp_headers/blob.o 00:04:39.161 CXX test/cpp_headers/conf.o 00:04:39.161 CXX test/cpp_headers/config.o 00:04:39.161 CXX test/cpp_headers/cpuset.o 00:04:39.161 CXX test/cpp_headers/crc16.o 00:04:39.161 CXX test/cpp_headers/crc32.o 00:04:39.161 CXX test/cpp_headers/crc64.o 00:04:39.161 CXX test/cpp_headers/dif.o 00:04:39.161 CXX test/cpp_headers/dma.o 00:04:39.161 CXX test/cpp_headers/endian.o 00:04:39.161 CXX test/cpp_headers/env.o 00:04:39.161 CXX test/cpp_headers/env_dpdk.o 00:04:39.161 CXX test/cpp_headers/event.o 00:04:39.161 CXX test/cpp_headers/fd_group.o 00:04:39.161 CXX test/cpp_headers/fd.o 00:04:39.161 CXX test/cpp_headers/file.o 00:04:39.161 CXX test/cpp_headers/ftl.o 00:04:39.161 CXX test/cpp_headers/gpt_spec.o 00:04:39.161 CXX test/cpp_headers/hexlify.o 00:04:39.161 CXX test/cpp_headers/histogram_data.o 00:04:39.161 CXX test/cpp_headers/idxd.o 00:04:39.161 CXX test/cpp_headers/idxd_spec.o 00:04:39.161 CXX test/cpp_headers/init.o 00:04:39.161 CXX test/cpp_headers/ioat.o 00:04:39.161 CC app/spdk_tgt/spdk_tgt.o 00:04:39.161 CC test/app/histogram_perf/histogram_perf.o 00:04:39.161 CC test/app/jsoncat/jsoncat.o 00:04:39.161 CC test/env/vtophys/vtophys.o 00:04:39.161 CXX test/cpp_headers/ioat_spec.o 00:04:39.161 CC test/app/stub/stub.o 00:04:39.161 CC examples/vmd/lsvmd/lsvmd.o 00:04:39.425 CC test/event/reactor_perf/reactor_perf.o 00:04:39.425 CC examples/sock/hello_world/hello_sock.o 00:04:39.425 CC test/event/event_perf/event_perf.o 00:04:39.425 CC test/event/reactor/reactor.o 00:04:39.425 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:39.425 CC test/env/memory/memory_ut.o 00:04:39.425 CC test/env/pci/pci_ut.o 00:04:39.425 CC examples/util/zipf/zipf.o 00:04:39.425 CC test/nvme/fused_ordering/fused_ordering.o 00:04:39.425 CC examples/vmd/led/led.o 00:04:39.425 CC test/nvme/reserve/reserve.o 00:04:39.425 CC test/nvme/overhead/overhead.o 00:04:39.425 CC examples/nvme/hotplug/hotplug.o 00:04:39.425 CC examples/ioat/verify/verify.o 00:04:39.425 CC test/nvme/compliance/nvme_compliance.o 00:04:39.425 CC test/nvme/e2edp/nvme_dp.o 00:04:39.425 CC examples/idxd/perf/perf.o 00:04:39.425 CC test/thread/poller_perf/poller_perf.o 00:04:39.425 CC test/nvme/aer/aer.o 00:04:39.425 CC 
test/nvme/connect_stress/connect_stress.o 00:04:39.425 CC examples/nvme/hello_world/hello_world.o 00:04:39.425 CC examples/ioat/perf/perf.o 00:04:39.425 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:39.425 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:39.425 CC test/nvme/err_injection/err_injection.o 00:04:39.425 CC test/nvme/cuse/cuse.o 00:04:39.425 CC test/app/bdev_svc/bdev_svc.o 00:04:39.425 CC examples/accel/perf/accel_perf.o 00:04:39.425 CC test/nvme/reset/reset.o 00:04:39.425 CC test/dma/test_dma/test_dma.o 00:04:39.425 CC test/event/app_repeat/app_repeat.o 00:04:39.425 CC test/nvme/sgl/sgl.o 00:04:39.425 CC app/fio/nvme/fio_plugin.o 00:04:39.425 CC test/nvme/fdp/fdp.o 00:04:39.425 CC test/nvme/boot_partition/boot_partition.o 00:04:39.425 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:39.425 CC test/blobfs/mkfs/mkfs.o 00:04:39.425 CC test/nvme/simple_copy/simple_copy.o 00:04:39.425 CC examples/nvme/arbitration/arbitration.o 00:04:39.425 CC examples/blob/hello_world/hello_blob.o 00:04:39.425 CC examples/nvme/reconnect/reconnect.o 00:04:39.425 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:39.425 CC test/nvme/startup/startup.o 00:04:39.425 CC examples/nvme/abort/abort.o 00:04:39.425 CC examples/blob/cli/blobcli.o 00:04:39.425 CC test/bdev/bdevio/bdevio.o 00:04:39.425 CC examples/thread/thread/thread_ex.o 00:04:39.425 CC test/accel/dif/dif.o 00:04:39.425 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.425 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.425 CC test/event/scheduler/scheduler.o 00:04:39.425 CC examples/nvmf/nvmf/nvmf.o 00:04:39.425 CC app/fio/bdev/fio_plugin.o 00:04:39.688 LINK rpc_client_test 00:04:39.688 LINK spdk_lspci 00:04:39.688 CC test/lvol/esnap/esnap.o 00:04:39.688 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:39.688 CC test/env/mem_callbacks/mem_callbacks.o 00:04:39.688 LINK spdk_nvme_discover 00:04:39.950 LINK interrupt_tgt 00:04:39.950 LINK jsoncat 00:04:39.950 LINK histogram_perf 00:04:39.950 LINK lsvmd 00:04:39.950 LINK zipf 00:04:39.950 LINK spdk_trace_record 00:04:39.950 LINK vtophys 00:04:39.950 LINK poller_perf 00:04:39.950 LINK app_repeat 00:04:39.950 LINK stub 00:04:39.950 LINK nvmf_tgt 00:04:39.950 LINK event_perf 00:04:39.950 LINK reactor_perf 00:04:39.950 LINK vhost 00:04:39.950 LINK reactor 00:04:39.950 CXX test/cpp_headers/iscsi_spec.o 00:04:39.950 LINK boot_partition 00:04:39.950 LINK iscsi_tgt 00:04:39.950 CXX test/cpp_headers/json.o 00:04:39.950 CXX test/cpp_headers/jsonrpc.o 00:04:39.950 CXX test/cpp_headers/keyring.o 00:04:39.950 CXX test/cpp_headers/keyring_module.o 00:04:39.950 LINK pmr_persistence 00:04:39.950 CXX test/cpp_headers/likely.o 00:04:39.950 CXX test/cpp_headers/log.o 00:04:39.950 LINK connect_stress 00:04:39.950 CXX test/cpp_headers/lvol.o 00:04:39.950 LINK env_dpdk_post_init 00:04:39.950 CXX test/cpp_headers/memory.o 00:04:39.950 LINK reserve 00:04:39.950 LINK startup 00:04:39.950 CXX test/cpp_headers/mmio.o 00:04:39.950 CXX test/cpp_headers/nbd.o 00:04:39.950 LINK err_injection 00:04:39.950 LINK mkfs 00:04:39.950 LINK led 00:04:39.950 CXX test/cpp_headers/notify.o 00:04:39.951 LINK hello_sock 00:04:39.951 CXX test/cpp_headers/nvme.o 00:04:39.951 LINK spdk_tgt 00:04:39.951 CXX test/cpp_headers/nvme_intel.o 00:04:39.951 CXX test/cpp_headers/nvme_ocssd.o 00:04:39.951 LINK ioat_perf 00:04:39.951 LINK fused_ordering 00:04:39.951 LINK verify 00:04:39.951 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:39.951 LINK hello_world 00:04:39.951 CXX test/cpp_headers/nvme_spec.o 00:04:39.951 CXX test/cpp_headers/nvme_zns.o 
00:04:39.951 CXX test/cpp_headers/nvmf_cmd.o 00:04:39.951 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:39.951 CXX test/cpp_headers/nvmf.o 00:04:39.951 CXX test/cpp_headers/nvmf_spec.o 00:04:39.951 LINK bdev_svc 00:04:39.951 CXX test/cpp_headers/nvmf_transport.o 00:04:39.951 LINK cmb_copy 00:04:39.951 LINK doorbell_aers 00:04:39.951 CXX test/cpp_headers/opal.o 00:04:39.951 CXX test/cpp_headers/opal_spec.o 00:04:39.951 CXX test/cpp_headers/pci_ids.o 00:04:39.951 LINK simple_copy 00:04:39.951 CXX test/cpp_headers/queue.o 00:04:40.210 CXX test/cpp_headers/reduce.o 00:04:40.210 CXX test/cpp_headers/pipe.o 00:04:40.210 CXX test/cpp_headers/rpc.o 00:04:40.210 CXX test/cpp_headers/scheduler.o 00:04:40.210 CXX test/cpp_headers/scsi.o 00:04:40.210 CXX test/cpp_headers/scsi_spec.o 00:04:40.210 CXX test/cpp_headers/sock.o 00:04:40.210 CXX test/cpp_headers/stdinc.o 00:04:40.210 LINK scheduler 00:04:40.210 LINK reset 00:04:40.210 CXX test/cpp_headers/string.o 00:04:40.210 LINK hello_blob 00:04:40.210 LINK aer 00:04:40.210 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:40.210 CXX test/cpp_headers/thread.o 00:04:40.210 LINK nvme_dp 00:04:40.210 CXX test/cpp_headers/trace.o 00:04:40.210 LINK thread 00:04:40.210 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:40.210 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:40.210 LINK hotplug 00:04:40.210 CXX test/cpp_headers/trace_parser.o 00:04:40.210 CXX test/cpp_headers/tree.o 00:04:40.210 LINK sgl 00:04:40.210 LINK overhead 00:04:40.210 CXX test/cpp_headers/ublk.o 00:04:40.210 CXX test/cpp_headers/util.o 00:04:40.210 CXX test/cpp_headers/uuid.o 00:04:40.210 CXX test/cpp_headers/version.o 00:04:40.210 LINK spdk_trace 00:04:40.210 CXX test/cpp_headers/vfio_user_pci.o 00:04:40.210 LINK spdk_dd 00:04:40.210 LINK hello_bdev 00:04:40.210 LINK nvmf 00:04:40.210 CXX test/cpp_headers/vfio_user_spec.o 00:04:40.210 CXX test/cpp_headers/vhost.o 00:04:40.210 CXX test/cpp_headers/vmd.o 00:04:40.210 CXX test/cpp_headers/xor.o 00:04:40.210 LINK arbitration 00:04:40.210 LINK pci_ut 00:04:40.210 CXX test/cpp_headers/zipf.o 00:04:40.210 LINK nvme_compliance 00:04:40.469 LINK idxd_perf 00:04:40.469 LINK fdp 00:04:40.469 LINK test_dma 00:04:40.469 LINK reconnect 00:04:40.469 LINK bdevio 00:04:40.469 LINK abort 00:04:40.469 LINK accel_perf 00:04:40.469 LINK blobcli 00:04:40.728 LINK spdk_nvme 00:04:40.728 LINK dif 00:04:40.728 LINK spdk_bdev 00:04:40.728 LINK nvme_manage 00:04:40.728 LINK nvme_fuzz 00:04:40.728 LINK spdk_nvme_perf 00:04:40.728 LINK mem_callbacks 00:04:40.987 LINK spdk_nvme_identify 00:04:40.987 LINK vhost_fuzz 00:04:40.987 LINK bdevperf 00:04:40.987 LINK spdk_top 00:04:41.247 LINK memory_ut 00:04:41.505 LINK cuse 00:04:42.074 LINK iscsi_fuzz 00:04:45.364 LINK esnap 00:04:45.364 00:04:45.364 real 0m55.002s 00:04:45.364 user 7m43.513s 00:04:45.364 sys 4m41.085s 00:04:45.364 13:32:38 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:04:45.364 13:32:38 make -- common/autotest_common.sh@10 -- $ set +x 00:04:45.364 ************************************ 00:04:45.364 END TEST make 00:04:45.364 ************************************ 00:04:45.364 13:32:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:45.364 13:32:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:45.364 13:32:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:45.364 13:32:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.364 13:32:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid 
]] 00:04:45.364 13:32:38 -- pm/common@44 -- $ pid=1112383 00:04:45.364 13:32:38 -- pm/common@50 -- $ kill -TERM 1112383 00:04:45.364 13:32:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.364 13:32:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:45.364 13:32:38 -- pm/common@44 -- $ pid=1112385 00:04:45.364 13:32:38 -- pm/common@50 -- $ kill -TERM 1112385 00:04:45.364 13:32:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.364 13:32:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:45.364 13:32:38 -- pm/common@44 -- $ pid=1112387 00:04:45.364 13:32:38 -- pm/common@50 -- $ kill -TERM 1112387 00:04:45.364 13:32:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.364 13:32:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:45.364 13:32:38 -- pm/common@44 -- $ pid=1112410 00:04:45.364 13:32:38 -- pm/common@50 -- $ sudo -E kill -TERM 1112410 00:04:45.364 13:32:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.364 13:32:38 -- nvmf/common.sh@7 -- # uname -s 00:04:45.364 13:32:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.364 13:32:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.364 13:32:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.364 13:32:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.364 13:32:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.364 13:32:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.364 13:32:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.364 13:32:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.364 13:32:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.364 13:32:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.364 13:32:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:45.364 13:32:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:45.364 13:32:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.364 13:32:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.364 13:32:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:45.364 13:32:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.364 13:32:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.364 13:32:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.364 13:32:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.364 13:32:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.364 13:32:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.364 13:32:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.364 13:32:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.364 13:32:38 -- paths/export.sh@5 -- # export PATH 00:04:45.364 13:32:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.364 13:32:38 -- nvmf/common.sh@47 -- # : 0 00:04:45.364 13:32:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:45.364 13:32:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:45.364 13:32:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.364 13:32:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.364 13:32:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.364 13:32:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:45.364 13:32:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:45.364 13:32:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:45.364 13:32:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:45.364 13:32:38 -- spdk/autotest.sh@32 -- # uname -s 00:04:45.364 13:32:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:45.364 13:32:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:45.364 13:32:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:45.364 13:32:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:45.364 13:32:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:45.364 13:32:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:45.364 13:32:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:45.364 13:32:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:45.624 13:32:38 -- spdk/autotest.sh@48 -- # udevadm_pid=1174040 00:04:45.624 13:32:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:45.624 13:32:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:45.624 13:32:38 -- pm/common@17 -- # local monitor 00:04:45.624 13:32:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.624 13:32:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.624 13:32:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.624 13:32:38 -- pm/common@21 -- # date +%s 00:04:45.624 13:32:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:45.624 13:32:38 -- pm/common@21 -- # date +%s 00:04:45.624 13:32:38 -- pm/common@25 -- # sleep 1 00:04:45.624 13:32:38 -- pm/common@21 -- # date +%s 00:04:45.624 13:32:38 -- pm/common@21 -- # date +%s 00:04:45.624 13:32:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105558 00:04:45.624 13:32:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105558 00:04:45.624 13:32:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105558 00:04:45.624 13:32:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105558 00:04:45.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105558_collect-vmstat.pm.log 00:04:45.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105558_collect-bmc-pm.bmc.pm.log 00:04:45.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105558_collect-cpu-load.pm.log 00:04:45.624 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105558_collect-cpu-temp.pm.log 00:04:46.561 13:32:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:46.561 13:32:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:46.561 13:32:39 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:46.561 13:32:39 -- common/autotest_common.sh@10 -- # set +x 00:04:46.561 13:32:39 -- spdk/autotest.sh@59 -- # create_test_list 00:04:46.561 13:32:39 -- common/autotest_common.sh@747 -- # xtrace_disable 00:04:46.561 13:32:39 -- common/autotest_common.sh@10 -- # set +x 00:04:46.561 13:32:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:46.561 13:32:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.561 13:32:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.561 13:32:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:46.561 13:32:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.561 13:32:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:46.561 13:32:39 -- common/autotest_common.sh@1454 -- # uname 00:04:46.561 13:32:39 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:04:46.561 13:32:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:46.561 13:32:39 -- common/autotest_common.sh@1474 -- # uname 00:04:46.561 13:32:39 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:04:46.561 13:32:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:46.561 13:32:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:46.561 13:32:39 -- spdk/autotest.sh@72 -- # hash lcov 00:04:46.561 13:32:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:46.561 13:32:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:46.561 --rc lcov_branch_coverage=1 00:04:46.561 --rc lcov_function_coverage=1 00:04:46.561 --rc genhtml_branch_coverage=1 00:04:46.561 --rc genhtml_function_coverage=1 00:04:46.561 --rc genhtml_legend=1 00:04:46.561 --rc geninfo_all_blocks=1 00:04:46.561 ' 
00:04:46.561 13:32:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:04:46.561 --rc lcov_branch_coverage=1
00:04:46.561 --rc lcov_function_coverage=1
00:04:46.561 --rc genhtml_branch_coverage=1
00:04:46.561 --rc genhtml_function_coverage=1
00:04:46.561 --rc genhtml_legend=1
00:04:46.561 --rc geninfo_all_blocks=1
00:04:46.561 '
00:04:46.561 13:32:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:04:46.561 --rc lcov_branch_coverage=1
00:04:46.561 --rc lcov_function_coverage=1
00:04:46.561 --rc genhtml_branch_coverage=1
00:04:46.561 --rc genhtml_function_coverage=1
00:04:46.561 --rc genhtml_legend=1
00:04:46.561 --rc geninfo_all_blocks=1
00:04:46.561 --no-external'
00:04:46.561 13:32:39 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:04:46.561 --rc lcov_branch_coverage=1
00:04:46.561 --rc lcov_function_coverage=1
00:04:46.561 --rc genhtml_branch_coverage=1
00:04:46.561 --rc genhtml_function_coverage=1
00:04:46.561 --rc genhtml_legend=1
00:04:46.561 --rc geninfo_all_blocks=1
00:04:46.561 --no-external'
00:04:46.561 13:32:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:04:46.561 lcov: LCOV version 1.14
00:04:46.561 13:32:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:04:58.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:58.814 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:05:16.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno ["no functions found" reported for every header object in that directory, accel.gcno through zipf.gcno]
00:05:18.814 13:33:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:05:18.814 13:33:11 -- common/autotest_common.sh@723 -- # xtrace_disable
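The "no functions found" warnings above are expected: the cpp_headers objects are compile-only checks of public headers, so their .gcno files contain no executable functions for geninfo to record. For reference, a minimal sketch of the Baseline capture traced at autotest.sh@85, assuming lcov 1.14 flag semantics and using this job's paths:

    # -c -i captures an initial, all-zero baseline from the .gcno graph files;
    # merging it with the post-test capture keeps files no test ever executed
    # visible at 0% instead of silently dropping them from the report.
    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
    --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
    --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    # $LCOV_OPTS is expanded unquoted on purpose so it splits into flags.
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$src/../output/cov_base.info"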
00:05:18.814 13:33:11 -- common/autotest_common.sh@10 -- # set +x 00:05:18.814 13:33:11 -- spdk/autotest.sh@91 -- # rm -f 00:05:18.814 13:33:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.100 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:22.100 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:22.358 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:22.358 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:22.358 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:05:22.358 13:33:15 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:22.358 13:33:15 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:05:22.358 13:33:15 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:05:22.358 13:33:15 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:05:22.358 13:33:15 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:05:22.358 13:33:15 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:05:22.358 13:33:15 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:05:22.358 13:33:15 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:22.358 13:33:15 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:05:22.358 13:33:15 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:22.358 13:33:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:22.358 13:33:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:22.358 13:33:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:22.358 13:33:15 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:22.358 13:33:15 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:22.358 No valid GPT data, bailing 00:05:22.358 13:33:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:22.358 13:33:15 -- scripts/common.sh@391 -- # pt= 00:05:22.358 13:33:15 -- scripts/common.sh@392 -- # return 1 00:05:22.358 13:33:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:22.358 1+0 records in 00:05:22.358 1+0 records out 00:05:22.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456158 s, 230 MB/s 00:05:22.358 13:33:15 -- spdk/autotest.sh@118 -- # sync 00:05:22.358 13:33:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:22.358 13:33:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:22.358 13:33:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:28.927 13:33:21 -- spdk/autotest.sh@124 -- # 
uname -s 00:05:28.927 13:33:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:28.927 13:33:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:28.927 13:33:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.927 13:33:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.927 13:33:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.927 ************************************ 00:05:28.927 START TEST setup.sh 00:05:28.927 ************************************ 00:05:28.927 13:33:21 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:28.927 * Looking for test storage... 00:05:28.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:28.927 13:33:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:28.927 13:33:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:28.927 13:33:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:28.927 13:33:21 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.927 13:33:21 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.927 13:33:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:28.927 ************************************ 00:05:28.927 START TEST acl 00:05:28.927 ************************************ 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:28.927 * Looking for test storage... 00:05:28.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:28.927 13:33:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.927 13:33:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:05:28.927 13:33:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:28.927 13:33:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:28.927 13:33:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:28.927 13:33:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:28.927 13:33:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:28.927 13:33:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.927 13:33:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:33.120 13:33:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:33.120 13:33:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:33.120 13:33:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:33.120 13:33:25 setup.sh.acl -- setup/acl.sh@15 -- # setup 
output status 00:05:33.120 13:33:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.120 13:33:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.411 Hugepages 00:05:36.411 node hugesize free / total 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 00:05:36.411 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.411 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:36.412 
13:33:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:05:36.412 13:33:28 setup.sh.acl -- setup/acl.sh@20 -- # continue
[xtrace condensed: the same read/match/skip cycle repeats for the remaining ioatdma channels, 0000:80:04.0 through 0000:80:04.7]
00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:36.412 13:33:29 setup.sh.acl --
setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:36.412 13:33:29 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:36.412 13:33:29 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.412 13:33:29 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.412 13:33:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:36.412 ************************************ 00:05:36.412 START TEST denied 00:05:36.412 ************************************ 00:05:36.412 13:33:29 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:05:36.412 13:33:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:05:36.412 13:33:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:36.412 13:33:29 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:05:36.412 13:33:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.412 13:33:29 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.725 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.725 13:33:32 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:44.999 00:05:44.999 real 0m8.193s 00:05:44.999 user 0m2.474s 00:05:44.999 sys 0m4.933s 00:05:44.999 13:33:37 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.999 13:33:37 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:44.999 ************************************ 00:05:44.999 END TEST denied 00:05:44.999 ************************************ 00:05:44.999 13:33:37 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:44.999 13:33:37 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:44.999 13:33:37 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.999 13:33:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:44.999 ************************************ 00:05:44.999 START TEST allowed 00:05:44.999 ************************************ 00:05:44.999 13:33:37 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:05:44.999 13:33:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:05:44.999 13:33:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:44.999 13:33:37 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:05:44.999 13:33:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.999 13:33:37 
setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:50.339 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:50.339 13:33:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:50.339 13:33:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:50.339 13:33:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:50.339 13:33:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:50.339 13:33:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:53.630 00:05:53.630 real 0m8.812s 00:05:53.630 user 0m2.498s 00:05:53.630 sys 0m4.886s 00:05:53.630 13:33:46 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.630 13:33:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:53.630 ************************************ 00:05:53.630 END TEST allowed 00:05:53.630 ************************************ 00:05:53.630 00:05:53.630 real 0m24.786s 00:05:53.630 user 0m7.754s 00:05:53.630 sys 0m15.117s 00:05:53.630 13:33:46 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.630 13:33:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:53.630 ************************************ 00:05:53.630 END TEST acl 00:05:53.630 ************************************ 00:05:53.630 13:33:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:53.630 13:33:46 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.630 13:33:46 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.630 13:33:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:53.630 ************************************ 00:05:53.630 START TEST hugepages 00:05:53.630 ************************************ 00:05:53.630 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:53.630 * Looking for test storage... 
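Both acl subtests that just finished funnel into the same verify step traced at setup/acl.sh@28-33: denied greps setup's output for the "Skipping denied controller" line, while allowed checks that the device actually came up under the expected driver. A condensed reconstruction of that check from the xtrace; treat it as a sketch, with the expected driver hard-coded to nvme as seen in this run rather than taken from the drivers map:

    # For each BDF argument, resolve its driver symlink under sysfs and
    # require the basename to match the driver this run expects (nvme for
    # 0000:d8:00.0 before the allowed test rebinds it to vfio-pci).
    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }
    verify 0000:d8:00.0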
00:05:53.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41769476 kB' 'MemAvailable: 45672324 kB' 'Buffers: 2704 kB' 'Cached: 10288868 kB' 'SwapCached: 0 kB' 'Active: 7112224 kB' 'Inactive: 3674932 kB' 'Active(anon): 6717668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498984 kB' 'Mapped: 201212 kB' 'Shmem: 6222084 kB' 'KReclaimable: 481068 kB' 'Slab: 1111104 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 630036 kB' 'KernelStack: 22208 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8111092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216436 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:53.630 13:33:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for every remaining /proc/meminfo key, MemFree through HugePages_Surp]
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:53.631 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:53.632 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:53.632 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:53.632 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
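The condensed loop above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" record at a time until the requested key is hit; here it found Hugepagesize and echoed 2048 (kB). A minimal standalone sketch of the same lookup pattern, assuming a plain /proc/meminfo layout rather than reproducing the SPDK source (which slurps the file with mapfile and strips per-node "Node N" prefixes first):

    # Look up one field in /proc/meminfo (or a node's meminfo when an id is given).
    # Prints the value column and returns 0 on a match, 1 otherwise.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node id is supplied.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done <"$mem_f"
        return 1
    }

    get_meminfo Hugepagesize   # on this runner: 2048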
00:05:53.632 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:53.632 ************************************
00:05:53.632 START TEST default_setup
00:05:53.632 ************************************
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:53.632 13:33:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:56.921 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:56.921 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:56.921 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:56.921 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:57.180 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
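The values traced by get_test_nr_hugepages follow from the Hugepagesize discovered above: a 2097152 kB (2 GiB) request divided by 2048 kB pages gives the nr_hugepages=1024 assigned to the one requested node (node 0). As a quick check in shell arithmetic (illustrative only, not harness code):

    # 2 GiB requested / 2048 kB per huge page = 1024 pages for node 0
    size_kb=2097152
    hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))   # prints 1024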
00:05:57.180 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:59.092 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43926628 kB' 'MemAvailable: 47829476 kB' 'Buffers: 2704 kB' 'Cached: 10288992 kB' 'SwapCached: 0 kB' 'Active: 7130536 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735980 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517180 kB' 'Mapped: 201224 kB' 'Shmem: 6222208 kB' 'KReclaimable: 481068 kB' 'Slab: 1109756 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628688 kB' 'KernelStack: 22336 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8130904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
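get_meminfo is invoked here with no node argument, so `local node=` is empty, the `[[ -e /sys/devices/system/node/node/meminfo ]]` test (note the missing node number in the path) fails, and the lookup falls back to /proc/meminfo, whose full contents appear in the printf record above; AnonHugePages is 0 kB in that snapshot. Outside the harness, the same single-field lookup can be done in one line; a sketch, not part of the SPDK scripts:

    # Print the AnonHugePages value (in kB) from /proc/meminfo.
    awk '/^AnonHugePages:/ {print $2}' /proc/meminfo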
00:05:59.092 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: the read/compare loop skips every key from MemTotal through HardwareCorrupted; none matches AnonHugePages]
00:05:59.093 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
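With anon=0 recorded, verify_nr_hugepages re-runs get_meminfo once per statistic (HugePages_Surp here, HugePages_Rsvd next), re-reading meminfo each time. The same three values could be collected in a single pass; a compact alternative sketch, not the harness code:

    # One pass over /proc/meminfo instead of three separate get_meminfo calls.
    declare -A mi
    while IFS=': ' read -r key val _; do mi[$key]=$val; done </proc/meminfo
    anon=${mi[AnonHugePages]}    # 0 in the snapshots above
    surp=${mi[HugePages_Surp]}   # 0
    resv=${mi[HugePages_Rsvd]}   # 0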
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43928656 kB' 'MemAvailable: 47831504 kB' 'Buffers: 2704 kB' 'Cached: 10288992 kB' 'SwapCached: 0 kB' 'Active: 7130012 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735456 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516536 kB' 'Mapped: 201188 kB' 'Shmem: 6222208 kB' 'KReclaimable: 481068 kB' 'Slab: 1109756 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628688 kB' 'KernelStack: 22304 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8130924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216580 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:05:59.094 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: the read/compare loop skips every key from MemTotal through HugePages_Rsvd; none matches HugePages_Surp]
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43928044 kB' 'MemAvailable: 47830892 kB' 'Buffers: 2704 kB' 'Cached: 10288992 kB' 'SwapCached: 0 kB' 'Active: 7129732 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735176 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516248 kB' 'Mapped: 201188 kB' 'Shmem: 6222208 kB' 'KReclaimable: 481068 kB' 'Slab: 1109776 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628708 kB' 'KernelStack: 22208 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8131080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:05:59.096 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: the read/compare loop is scanning the same keys against HugePages_Rsvd; this excerpt ends mid-scan]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.097 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:59.098 nr_hugepages=1024 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:59.098 resv_hugepages=0 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:59.098 surplus_hugepages=0 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:59.098 anon_hugepages=0 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43927732 kB' 'MemAvailable: 47830580 kB' 'Buffers: 2704 kB' 'Cached: 10289044 kB' 'SwapCached: 0 kB' 'Active: 7130272 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735716 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516224 kB' 'Mapped: 201188 kB' 'Shmem: 6222260 kB' 'KReclaimable: 481068 kB' 'Slab: 1109776 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628708 kB' 'KernelStack: 22272 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8131468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.098 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
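For readers decoding the xtrace above: get_meminfo in setup/common.sh is a field scan over /proc/meminfo, and hugepages.sh then checks that the kernel's hugepage accounting matches what the test requested. A minimal sketch of that idiom follows; it is illustrative only (the function body is simplified from what the trace shows, not copied from the SPDK source):

get_meminfo_sketch() {                         # sketch of the setup/common.sh helper
    local get=$1 var val _
    while IFS=': ' read -r var val _; do       # split "Key: value kB" on ':' and spaces
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < /proc/meminfo
    echo 0                                     # field absent -> report 0
}

total=$(get_meminfo_sketch HugePages_Total)    # 1024 in the run above
surp=$(get_meminfo_sketch HugePages_Surp)      # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0
nr_hugepages=1024                              # what default_setup requested
# the accounting check the trace performs at hugepages.sh@107:
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting holds'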
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:59.100 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26964108 kB' 'MemUsed: 5675032 kB' 'SwapCached: 0 kB' 'Active: 2370948 kB' 'Inactive: 181896 kB' 'Active(anon): 2103084 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185248 kB' 'Mapped: 150512 kB' 'AnonPages: 370752 kB' 'Shmem: 1735488 kB' 'KernelStack: 13128 kB' 'PageTables: 5712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 405864 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 255848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace repeats elided: the per-field scan of node0's meminfo walks MemTotal through HugePages_Free before matching HugePages_Surp ...]
00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:59.101 node0=1024 expecting 1024 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:59.101 00:05:59.101 real 0m5.372s 00:05:59.101 user 0m1.445s 00:05:59.101 sys 0m2.420s 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.101 13:33:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:59.101 ************************************ 00:05:59.101 END TEST default_setup 00:05:59.101 ************************************ 00:05:59.101 13:33:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:59.101 13:33:51 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.101 13:33:51 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.101 13:33:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:59.101 ************************************ 00:05:59.101 START TEST per_node_1G_alloc 00:05:59.101 ************************************ 00:05:59.101 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:05:59.101 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:59.101 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:59.101 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:59.102 
13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.102 13:33:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:02.394 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:02.394 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:02.659 13:33:55 
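# The per_node_1G_alloc prologue above turns the requested per-node size into
# a count of default-sized huge pages and spreads it over the nodes named in
# HUGENODE. A sketch of that arithmetic, assuming the 2048 kB Hugepagesize
# reported in the meminfo snapshots below (variable names are illustrative):
size_kb=1048576                                  # 1G request per node, in kB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
declare -a nodes_test
for node in 0 1; do                              # HUGENODE=0,1
    nodes_test[$node]=$nr_hugepages              # 512 x 2 MiB pages per node
done
# scripts/setup.sh is then run with NRHUGE=512 HUGENODE=0,1 to reserve them.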
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.659 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43968180 kB' 'MemAvailable: 47871028 kB' 'Buffers: 2704 kB' 'Cached: 10289152 kB' 'SwapCached: 0 kB' 'Active: 7128136 kB' 'Inactive: 3674932 kB' 'Active(anon): 6733580 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514428 kB' 'Mapped: 200736 kB' 'Shmem: 6222368 kB' 'KReclaimable: 481068 kB' 'Slab: 1108660 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627592 kB' 'KernelStack: 22096 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8119204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.660 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.660 13:33:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.660
[... identical IFS/read/compare/continue trace repeats for each remaining /proc/meminfo key (MemAvailable through HardwareCorrupted) ...]
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- #
[[ -n '' ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43967244 kB' 'MemAvailable: 47870092 kB' 'Buffers: 2704 kB' 'Cached: 10289152 kB' 'SwapCached: 0 kB' 'Active: 7133124 kB' 'Inactive: 3674932 kB' 'Active(anon): 6738568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519396 kB' 'Mapped: 200736 kB' 'Shmem: 6222368 kB' 'KReclaimable: 481068 kB' 'Slab: 1108624 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627556 kB' 'KernelStack: 22144 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8123180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216676 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.661 13:33:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.661 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.661
[... identical IFS/read/compare/continue trace repeats for each remaining /proc/meminfo key (SwapCached through HugePages_Rsvd) ...]
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43968844 kB' 'MemAvailable: 47871692 kB' 'Buffers: 2704 kB' 'Cached: 10289172 kB' 'SwapCached: 0 kB' 'Active: 7128872 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515144 kB' 'Mapped: 201148 kB' 'Shmem: 6222388 kB' 'KReclaimable: 481068 kB' 'Slab: 1108624 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627556 kB' 'KernelStack: 22160 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8119244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.663 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:02.663
[... identical IFS/read/compare/continue trace repeats for each /proc/meminfo key (Cached through KernelStack) while get_meminfo scans for HugePages_Rsvd ...]
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.665 13:33:55
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.665 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:02.666 nr_hugepages=1024 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:02.666 resv_hugepages=0 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:02.666 surplus_hugepages=0 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:02.666 anon_hugepages=0 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:02.666 13:33:55 
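Editor's note: the get_meminfo trace above reduces to a small amount of shell. A minimal, self-contained sketch of the same lookup (assuming only bash 4+ and the standard /proc and sysfs paths; this is a sketch, not the SPDK script itself):

  #!/usr/bin/env bash
  # Look up one key from /proc/meminfo, or from a per-NUMA-node meminfo
  # file when a node number is given (mirrors the trace above).
  shopt -s extglob                       # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo mem
      # Per-node meminfo lives in sysfs; with no node given, the test is
      # false (".../node/node/meminfo") and the global file is used.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Rsvd             # prints 0 on the box traced here

The linear scan is why the log shows one comparison per meminfo field: with roughly 55 fields per snapshot the loop is cheap, but every iteration lands in the xtrace output.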
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.666 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43966272 kB' 'MemAvailable: 47869120 kB' 'Buffers: 2704 kB' 'Cached: 10289176 kB' 'SwapCached: 0 kB' 'Active: 7131480 kB' 'Inactive: 3674932 kB' 'Active(anon): 6736924 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517756 kB' 'Mapped: 200736 kB' 'Shmem: 6222392 kB' 'KReclaimable: 481068 kB' 'Slab: 1108624 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627556 kB' 'KernelStack: 22144 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8122172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:02.666
[trace condensed: setup/common.sh@31-32 compares each snapshot field above against HugePages_Total in order, one [[ ... ]] / continue pair per field, until the key matches]
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:02.668
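Editor's note: get_nodes above discovers the NUMA topology with an extglob pattern; nodes_sys[N]=512 records the 512 hugepages this job had placed on each of the two nodes. A standalone sketch of that discovery step (the sysfs paths are standard Linux; unlike the traced run, this reads the live per-node 2048 kB hugepage count instead of hard-coding 512):

  #!/usr/bin/env bash
  # Enumerate NUMA nodes and their currently allocated 2M hugepages.
  shopt -s extglob nullglob              # +([0-9]) glob; no match -> empty list
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # Key by the numeric suffix of .../nodeN, as the traced script does.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"
  for n in "${!nodes_sys[@]}"; do
      echo "node$n: ${nodes_sys[n]} x 2048kB hugepages"
  done

On the machine traced here this would print no_nodes=2 with 512 pages per node, consistent with the HugePages_Total: 1024 in the global snapshot.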
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.668 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28036992 kB' 'MemUsed: 4602148 kB' 'SwapCached: 0 kB' 'Active: 2370504 kB' 'Inactive: 181896 kB' 'Active(anon): 2102640 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185324 kB' 'Mapped: 149536 kB' 'AnonPages: 370204 kB' 'Shmem: 1735564 kB' 'KernelStack: 13144 kB' 'PageTables: 5796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 404944 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 254928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:02.668
[trace condensed: setup/common.sh@31-32 compares each node0 field above against HugePages_Surp in order, one [[ ... ]] / continue pair per field, until the key matches]
13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15929952 kB' 'MemUsed: 11726128 kB' 'SwapCached: 0 kB' 'Active: 4757896 kB' 'Inactive: 3493036 kB' 'Active(anon): 4631204 kB' 'Inactive(anon): 0 kB' 'Active(file): 126692 kB' 'Inactive(file): 3493036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8106620 kB' 'Mapped: 50696 kB' 'AnonPages: 144400 kB' 'Shmem: 4486892 kB' 'KernelStack: 8984 kB' 'PageTables: 2560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331052 kB' 'Slab: 703680 kB' 'SReclaimable: 331052 kB' 'SUnreclaim: 372628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:02.670
]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.670 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.671 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:02.931 node0=512 expecting 512 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:02.931 node1=512 expecting 512 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:02.931 00:06:02.931 real 0m3.649s 00:06:02.931 user 0m1.364s 00:06:02.931 sys 0m2.353s 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:02.931 13:33:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 ************************************ 00:06:02.931 END TEST per_node_1G_alloc 00:06:02.931 ************************************ 00:06:02.931 13:33:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:02.931 13:33:55 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:02.931 13:33:55 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:02.931 13:33:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 ************************************ 00:06:02.931 START TEST even_2G_alloc 00:06:02.931 ************************************ 00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- 
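The long compare/continue runs collapsed above are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or the per-node copy under /sys/devices/system/node) field by field until the requested key matches, then echoing its value. A minimal self-contained sketch of that pattern, assuming only standard bash and sed; the function name is illustrative, not SPDK's:

    # Echo the value of KEY from /proc/meminfo, or from node NODE's
    # meminfo when a node index is given -- mirrors the traced loop.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; strip that,
        # then split each "Key: value [kB]" line on ':' and spaces.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Surp 1   # prints "0" on the node traced above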
00:06:02.931 13:33:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:02.931 13:33:55 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:02.931 13:33:55 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:02.931 13:33:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:02.931 ************************************
00:06:02.931 START TEST even_2G_alloc
00:06:02.931 ************************************
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:02.931 13:33:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:06.223 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:06.223 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
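For scale: in the get_test_nr_hugepages 2097152 trace above, a 2 GiB request (2097152 kB) becomes 1024 default 2048 kB hugepages, split evenly as 512 per node across the 2 NUMA nodes, which is what NRHUGE=1024 / HUGE_EVEN_ALLOC=yes then ask setup.sh to reserve. The arithmetic as a standalone bash sketch; variable names here are illustrative, not SPDK's:

    # 2 GiB request -> 1024 x 2048 kB hugepages -> 512 per node on 2 nodes.
    size_kb=2097152
    default_hugepage_kb=2048
    nr_hugepages=$(( size_kb / default_hugepage_kb ))    # 1024
    no_nodes=2
    nodes_test=()
    # Fill from the highest node down, as the traced loop does.
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))  # 512 each
    done
    echo "total=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"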
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.489 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43968700 kB' 'MemAvailable: 47871548 kB' 'Buffers: 2704 kB' 'Cached: 10289320 kB' 'SwapCached: 0 kB' 'Active: 7129192 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514896 kB' 'Mapped: 200420 kB' 'Shmem: 6222536 kB' 'KReclaimable: 481068 kB' 'Slab: 1108572 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627504 kB' 'KernelStack: 22112 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8118828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216596 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
[... identical "read -r var val _ / [[ <field> == AnonHugePages ]] / continue" records elided for the fields MemTotal through HardwareCorrupted ...]
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
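The AnonHugePages probe above walks every meminfo line to read a single key; for a quick manual spot check outside the harness, a plain awk one-liner gives the same answer (0 kB in this run):

    awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo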
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:06.490 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:06.491 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.491 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.491 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.491 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.491 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43969072 kB' 'MemAvailable: 47871920 kB' 'Buffers: 2704 kB' 'Cached: 10289324 kB' 'SwapCached: 0 kB' 'Active: 7128904 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734348 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515092 kB' 'Mapped: 200248 kB' 'Shmem: 6222540 kB' 'KReclaimable: 481068 kB' 'Slab: 1108548 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627480 kB' 'KernelStack: 22096 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8118844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216564 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
[... identical "read -r var val _ / [[ <field> == HugePages_Surp ]] / continue" records elided for the fields MemTotal through CmaTotal ...]
00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:06.492 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295220 kB' 'MemFree: 43969072 kB' 'MemAvailable: 47871920 kB' 'Buffers: 2704 kB' 'Cached: 10289360 kB' 'SwapCached: 0 kB' 'Active: 7128544 kB' 'Inactive: 3674932 kB' 'Active(anon): 6733988 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514688 kB' 'Mapped: 200248 kB' 'Shmem: 6222576 kB' 'KReclaimable: 481068 kB' 'Slab: 1108548 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627480 kB' 'KernelStack: 22080 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8118868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216580 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.493 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:06.493 
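The xtrace above is the body of the get_meminfo helper from the test's setup/common.sh: it picks /proc/meminfo (or, for a per-node query, that NUMA node's own meminfo file), strips any "Node N " prefix, and then scans key by key with IFS=': ' until the requested field matches, echoing the bare value. A minimal standalone sketch of that pattern, reconstructed from the trace rather than copied verbatim from the script:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of the lookup pattern visible in the trace (common.sh@16-33).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A per-node query reads that node's meminfo instead (common.sh@23-24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it so the key
        # always sits in the first field (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field until the requested key matches, then print
        # its value; this is the long compare/continue loop in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. surp=$(get_meminfo HugePages_Surp)      -> 0 in this run
    #      total=$(get_meminfo HugePages_Total 0)  -> 512 on node0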
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:06.495 nr_hugepages=1024
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:06.495 resv_hugepages=0
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:06.495 surplus_hugepages=0
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:06.495 anon_hugepages=0
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.495 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43969032 kB' 'MemAvailable: 47871880 kB' 'Buffers: 2704 kB' 'Cached: 10289380 kB' 'SwapCached: 0 kB' 'Active: 7128604 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514724 kB' 'Mapped: 200248 kB' 'Shmem: 6222596 kB' 'KReclaimable: 481068 kB' 'Slab: 1108548 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627480 kB' 'KernelStack: 22096 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8118888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216580 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:06:06.495-00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [per-key scan repeated until HugePages_Total matches; duplicate entries elided]
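A quick sanity check on the meminfo dumps above: the kernel's Hugetlb figure is exactly HugePages_Total multiplied by Hugepagesize, which is where the test's 2G total comes from. The arithmetic, using only values taken from the dumps:

    # 1024 pages x 2048 kB/page = 2097152 kB (2 GiB), matching the
    # 'Hugetlb: 2097152 kB' field in both dumps above.
    (( 1024 * 2048 == 2097152 )) && echo "Hugetlb accounting consistent"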
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
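hugepages.sh@107-110 assert the same identity twice, before and after re-reading the kernel's counters: the page total reported by the kernel must equal the requested nr_hugepages plus any surplus and reserved pages, and since both are 0 in this run the plain equality holds too. A hedged sketch of that bookkeeping, reusing the get_meminfo sketch above (the exact expressions in hugepages.sh may differ; the quantities are taken from the trace):

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # hugepages.sh@100 -> 0

    echo "nr_hugepages=$nr_hugepages"     # matches the nr_hugepages=1024 output above
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # Every page the kernel reports must be accounted for by the request
    # plus surplus/reserved pages; with surp == resv == 0 both checks pass.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))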
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28043544 kB' 'MemUsed: 4595596 kB' 'SwapCached: 0 kB' 'Active: 2372148 kB' 'Inactive: 181896 kB' 'Active(anon): 2104284 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185364 kB' 'Mapped: 149552 kB' 'AnonPages: 371896 kB' 'Shmem: 1735604 kB' 'KernelStack: 13128 kB' 'PageTables: 5820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 404860 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 254844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
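get_nodes (hugepages.sh@27-33) walks the NUMA nodes under /sys/devices/system/node and records the hugepage count per node; here the 1024 pages land as 512 + 512 across the two nodes, and @115-117 then re-read each node's own meminfo (note mem_f switching to node0's file in the trace above) expecting zero surplus pages. A rough reconstruction of that loop; the sysfs nr_hugepages path is an assumption for illustration, since the trace only shows the resulting counts:

    shopt -s extglob nullglob
    declare -a nodes_sys

    # Sketch of the per-node discovery traced at setup/hugepages.sh@27-33.
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Index by numeric node id; value = 2MB hugepages on that node
            # (hypothetical sysfs source; the trace records the literal 512).
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # the test requires at least one node
    }

    # hugepages.sh@115-117 then verifies each node the way the trace shows
    # for node0: read nodeN/meminfo and expect zero surplus pages.
    get_nodes
    for node in "${!nodes_sys[@]}"; do
        (( $(get_meminfo HugePages_Surp "$node") == 0 ))
    done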
00:06:06.497 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [node0 meminfo field scan: MemTotal through HugePages_Free skipped; HugePages_Surp matched]
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.498 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15926416 kB' 'MemUsed: 11729664 kB' 'SwapCached: 0 kB' 'Active: 4756460 kB' 'Inactive: 3493036 kB' 'Active(anon): 4629768 kB' 'Inactive(anon): 0 kB' 'Active(file): 126692 kB' 'Inactive(file): 3493036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8106756 kB' 'Mapped: 50696 kB' 'AnonPages: 142796 kB' 'Shmem: 4487028 kB' 'KernelStack: 8952 kB' 'PageTables: 2516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331052 kB' 'Slab: 703688 kB' 'SReclaimable: 331052 kB' 'SUnreclaim: 372636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:06.499 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [node1 meminfo field scan: MemTotal through HugePages_Free skipped; HugePages_Surp matched]
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:06.500 
00:06:06.500 real	0m3.737s
00:06:06.500 user	0m1.415s
00:06:06.500 sys	0m2.388s
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:06.500 13:33:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:06.500 ************************************
00:06:06.500 END TEST even_2G_alloc
00:06:06.500 ************************************
00:06:06.818 13:33:59 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:06.818 13:33:59 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:06.818 13:33:59 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:06.818 13:33:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:06.818 ************************************
00:06:06.818 START TEST odd_alloc
00:06:06.818 ************************************
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
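
Before following odd_alloc further: boiled down, the even_2G_alloc pass that just ended asks for 1024 pages with HUGE_EVEN_ALLOC=yes and then confirms the kernel spread them 512/512 across both nodes. A standalone sketch of that verification, reusing the get_meminfo sketch above (the loop form and the HugePages_Rsvd lookup are assumptions; the harness drives the same numbers through verify_nr_hugepages):

    # Sketch of the even_2G_alloc check: global total, then the per-node split.
    nr=$(get_meminfo HugePages_Total)      # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    (( nr == 1024 + surp + resv )) || echo 'unexpected hugepage total'
    for node in 0 1; do
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting 512"
    done
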
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:06.818 13:33:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:10.112 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:10.112 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
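
The `: 513` and `: 1` no-ops above are the remainder bookkeeping of get_test_nr_hugepages_per_node: 1025 pages over 2 nodes gives node1 floor(1025/2) = 512 and leaves node0 the odd 513, which is what makes this the "odd" test. A standalone reconstruction of that loop (the division and decrement expressions are inferred from the traced values):

    # Reconstruction of the per-node split for nr_hugepages=1025 on 2 nodes.
    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))  # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))         # traces as ': 513', ': 0'
        : $(( --_no_nodes ))                                        # traces as ': 1', ': 0'
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"            # node0=513 node1=512
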
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:10.112 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43980320 kB' 'MemAvailable: 47883168 kB' 'Buffers: 2704 kB' 'Cached: 10289484 kB' 'SwapCached: 0 kB' 'Active: 7130120 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735564 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516220 kB' 'Mapped: 200372 kB' 'Shmem: 6222700 kB' 'KReclaimable: 481068 kB' 'Slab: 1108884 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627816 kB' 'KernelStack: 22256 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8122004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.113 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
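The loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo one "Key: value kB" line at a time with IFS=': ' and read, continuing until the key matches and echoing the value (here AnonHugePages, yielding anon=0). A minimal standalone sketch of the same parsing technique, assuming a direct read from /proc/meminfo rather than the script's mapfile-based buffering:

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup (illustrative, not the exact
    # setup/common.sh implementation): split each /proc/meminfo line on
    # ': ' into key and value, print the value for the requested key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Total    # prints 1025 on the node traced here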
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:10.114 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:10.379 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:10.379 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43983004 kB' 'MemAvailable: 47885852 kB' 'Buffers: 2704 kB' 'Cached: 10289484 kB' 'SwapCached: 0 kB' 'Active: 7129268 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734712 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516264 kB' 'Mapped: 200324 kB' 'Shmem: 6222700 kB' 'KReclaimable: 481068 kB' 'Slab: 1108852 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627784 kB' 'KernelStack: 22304 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8120524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:06:10.379 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:10.379 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:10.379 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:10.379 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
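The [[ -e /sys/devices/system/node/node/meminfo ]] test and the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace are the per-node branch of the same helper: given a node id it reads /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that the extglob expansion strips so both sources parse identically (here node is empty, so /proc/meminfo is used). A rough sketch of that branch, with a hypothetical helper name:

    #!/usr/bin/env bash
    # Sketch of the per-node meminfo read seen in the trace; node_meminfo
    # is an illustrative name, the prefix strip mirrors common.sh@29.
    shopt -s extglob
    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # "Node 0 MemTotal: ... kB" -> "MemTotal: ... kB"
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    node_meminfo 0 | grep HugePages_Total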
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43982680 kB' 'MemAvailable: 47885528 kB' 'Buffers: 2704 kB' 'Cached: 10289504 kB' 'SwapCached: 0 kB' 'Active: 7128936 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734380 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514852 kB' 'Mapped: 200264 kB' 'Shmem: 6222720 kB' 'KReclaimable: 481068 kB' 'Slab: 1109044 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627976 kB' 'KernelStack: 22144 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8122160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:10.381 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
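The two arithmetic checks just traced are the odd_alloc bookkeeping: the test requested an odd count of 1025 hugepages and hugepages.sh verifies that the totals reported by /proc/meminfo add up before re-reading HugePages_Total. The same identity spelled out with this run's values; the standalone script framing is illustrative, only the variable names mirror the trace:

    #!/usr/bin/env bash
    # Hugepage accounting identity checked at hugepages.sh@107/@109,
    # filled in with the values extracted from /proc/meminfo above.
    nr_hugepages=1025   # requested odd allocation
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages (kB), expected 0 here
    total=1025          # HugePages_Total

    (( total == nr_hugepages + surp + resv )) || { echo "accounting mismatch" >&2; exit 1; }
    (( total == nr_hugepages )) && echo "all $total pages persistent (no surplus/reserved)"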
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43982180 kB' 'MemAvailable: 47885028 kB' 'Buffers: 2704 kB' 'Cached: 10289524 kB' 'SwapCached: 0 kB' 'Active: 7129288 kB' 'Inactive: 3674932 kB' 'Active(anon): 6734732 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515204 kB' 'Mapped: 200264 kB' 'Shmem: 6222740 kB' 'KReclaimable: 481068 kB' 'Slab: 1109024 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 627956 kB' 'KernelStack: 22192 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8122316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216724 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:10.383 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.384 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.385 13:34:03 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28045344 kB' 'MemUsed: 4593796 kB' 'SwapCached: 0 kB' 'Active: 2372392 kB' 'Inactive: 181896 kB' 'Active(anon): 2104528 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185420 kB' 'Mapped: 149568 kB' 'AnonPages: 372004 kB' 'Shmem: 1735660 kB' 'KernelStack: 13144 kB' 'PageTables: 5852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 405368 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 255352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.385 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- 
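For orientation, here is a minimal sketch of the get_meminfo helper whose xtrace fills this section, reconstructed from the traced commands in setup/common.sh above; treat it as a reading aid under that assumption, not the verbatim SPDK source.

    # Sketch of setup/common.sh:get_meminfo, reconstructed from this xtrace.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1    # key to look up, e.g. HugePages_Surp
        local node=$2   # optional NUMA node number
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk the key/value pairs and print the value of the requested key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On this host the per-node dumps in this trace give get_meminfo HugePages_Total 0 -> 512 and get_meminfo HugePages_Total 1 -> 513, which is where the 1025 total checked at hugepages.sh@110 comes from.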
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15938844 kB' 'MemUsed: 11717236 kB' 'SwapCached: 0 kB' 'Active: 4756968 kB' 'Inactive: 3493036 kB' 'Active(anon): 4630276 kB' 'Inactive(anon): 0 kB' 'Active(file): 126692 kB' 'Inactive(file): 3493036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8106860 kB' 'Mapped: 50696 kB' 'AnonPages: 143292 kB' 'Shmem: 4487132 kB' 'KernelStack: 9000 kB' 'PageTables: 2688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331052 kB' 'Slab: 703816 kB' 'SReclaimable: 331052 kB' 'SUnreclaim: 372764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:06:10.386 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: each node1 key from MemTotal through HugePages_Free fails the HugePages_Surp comparison and hits continue]
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:10.387 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:06:10.388 node0=512 expecting 513
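The same per-node counters can also be read straight from sysfs; a short illustration (not part of the test suite), assuming the usual 'Node <n> ' prefix on per-node meminfo lines:

    # Illustration only: print HugePages_Total for every NUMA node.
    # Per-node lines look like "Node 1 HugePages_Total:   513", hence $3/$4.
    for f in /sys/devices/system/node/node*/meminfo; do
        awk '$3 == "HugePages_Total:" {print FILENAME ": " $4}' "$f"
    done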
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:06:10.388 node1=513 expecting 512
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:06:10.388
00:06:10.388 real	0m3.704s
00:06:10.388 user	0m1.360s
00:06:10.388 sys	0m2.412s
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:10.388 13:34:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:10.388 ************************************
00:06:10.388 END TEST odd_alloc
00:06:10.388 ************************************
00:06:10.388 13:34:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:06:10.388 13:34:03 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:10.388 13:34:03 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:10.388 13:34:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:10.388 ************************************
00:06:10.388 START TEST custom_alloc
00:06:10.388 ************************************
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace elided: with no user override and no nodes_hp entries yet, the helper splits the 512 pages evenly across the 2 nodes, assigning nodes_test[1]=256 and nodes_test[0]=256]
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[xtrace elided: nodes_hp[0] is now set, so the helper copies it into the test map, assigning nodes_test[0]=512, and returns]
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
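The two conversions traced above (1048576 -> 512 pages, 2097152 -> 1024 pages) are consistent with dividing the requested size by the 2048 kB default hugepage size; a hedged sketch of that arithmetic, with the kB unit inferred from the Hugepagesize line in the meminfo dump later in this trace:

    # Sketch, assuming size and default_hugepages share the kB unit.
    default_hugepages=2048          # Hugepagesize: 2048 kB
    for size in 1048576 2097152; do
        (( size >= default_hugepages )) && echo $(( size / default_hugepages ))
    done
    # prints 512 then 1024, matching the nr_hugepages values above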
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:06:10.388 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[xtrace elided: the final get_test_nr_hugepages_per_node pass copies nodes_hp into the test map, assigning nodes_test[0]=512 and nodes_test[1]=1024, then returns]
00:06:10.389 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:06:10.389 13:34:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:06:10.389 13:34:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:10.389 13:34:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:13.681 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:13.944 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
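The HUGENODE string handed to setup.sh above is just the nodes_hp array joined on the comma IFS declared at hugepages.sh@167; a minimal reproduction of that construction (its 512 + 1024 pages also explain the nr_hugepages=1536 at @188):

    # Minimal reproduction of the traced HUGENODE construction.
    IFS=,                            # mirrors hugepages.sh@167: local IFS=,
    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "HUGENODE=${HUGENODE[*]}"   # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024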
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:13.944 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42926164 kB' 'MemAvailable: 46829012 kB' 'Buffers: 2704 kB' 'Cached: 10289652 kB' 'SwapCached: 0 kB' 'Active: 7130456 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735900 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515752 kB' 'Mapped: 200360 kB' 'Shmem: 6222868 kB' 'KReclaimable: 481068 kB' 'Slab: 1109888 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628820 kB' 'KernelStack: 22080 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8120568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216644 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
[xtrace elided: the read loop walks the system-wide keys (MemTotal onward) looking for AnonHugePages; the captured log breaks off mid-loop at the KernelStack comparison]
setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.945 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42927184 kB' 'MemAvailable: 46830032 kB' 'Buffers: 2704 kB' 'Cached: 10289656 kB' 'SwapCached: 0 kB' 'Active: 7129656 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 
'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515476 kB' 'Mapped: 200276 kB' 'Shmem: 6222872 kB' 'KReclaimable: 481068 kB' 'Slab: 1109884 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628816 kB' 'KernelStack: 22080 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8120584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:13.946 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
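The records above are setup/common.sh's get_meminfo helper doing a plain linear scan: each line of the snapshot is split on IFS=': ' into a key and a value, non-matching keys fall through to continue, and the first match echoes its value (here 0 for AnonHugePages, in kB) and returns. The backslash-riddled \A\n\o\n\H\u\g\e\P\a\g\e\s is just xtrace re-quoting the literal right-hand side of [[ ... == ... ]] so it cannot be read as a glob. A minimal sketch of the same scan, reading /proc/meminfo directly rather than from the pre-read array the helper actually walks:

    #!/usr/bin/env bash
    # Sketch only -- not the SPDK helper itself. Walk /proc/meminfo, split
    # each line on ': ', and print the value of the first key matching $1.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1   # key not present
    }

    get_meminfo_sketch AnonHugePages   # prints e.g. "0" (kB), as in the trace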
[... the scan restarts at MemTotal, now comparing every key against HugePages_Surp; everything from MemTotal through HugePages_Rsvd takes the continue branch ...]
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:13.947 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42927688 kB' 'MemAvailable: 46830536 kB' 'Buffers: 2704 kB' 'Cached: 10289656 kB' 'SwapCached: 0 kB' 'Active: 7129656 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515476 kB' 'Mapped: 200276 kB' 'Shmem: 6222872 kB' 'KReclaimable: 481068 kB' 'Slab: 1109884 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628816 kB' 'KernelStack: 22080 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8120604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
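Each call replays the same prologue: mem_f defaults to /proc/meminfo, and because node= is empty in this run, the probe literally tests /sys/devices/system/node/node/meminfo, a path that never exists, so the system-wide file wins. When a NUMA node is supplied, the per-node file is read instead, and its "Node N " line prefix is stripped by the extglob substitution visible at common.sh@29 so both sources parse identically. A hedged sketch of that selection (node=0 is a hypothetical example, and the mapfile redirection is an assumption about how the helper feeds itself):

    #!/usr/bin/env bash
    # Sketch of the node-aware source selection seen in the prologue above.
    shopt -s extglob            # required for the +([0-9]) pattern below
    node=0                      # hypothetical NUMA node
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines begin with "Node 0 "; strip the prefix so the
    # downstream key/value scan sees the same shape either way:
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # first three lines, prefix-free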
[... the scan walks the snapshot a third time, now against HugePages_Rsvd; every key from MemTotal through HugePages_Free takes the continue branch ...]
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:06:14.214 nr_hugepages=1536
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:14.214 resv_hugepages=0
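With all three counters collected (anon=0, surp=0, resv=0), the script prints them and then, in the hugepages.sh@107/@109 checks that follow, verifies that the 1536 pages the custom_alloc test requested are exactly what the kernel reports. Restating that arithmetic with the values from the snapshot above:

    # Values straight from the trace and the meminfo snapshot above.
    nr_hugepages=1536 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv ))   # the @107 check: 1536 == 1536+0+0
    (( 1536 == nr_hugepages ))                 # the @109 check
    # Byte-math cross-check: 1536 pages * 2048 kB/page = 3145728 kB, which is
    # exactly the 'Hugetlb: 3145728 kB' field in the snapshot.
    echo $(( 1536 * 2048 ))                    # -> 3145728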
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:06:14.214 nr_hugepages=1536
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:14.214 resv_hugepages=0
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:14.214 surplus_hugepages=0
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:14.214 anon_hugepages=0
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: get_meminfo prologue -- get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _]
00:06:14.214 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42929312 kB' 'MemAvailable: 46832160 kB' 'Buffers: 2704 kB' 'Cached: 10289696 kB' 'SwapCached: 0 kB' 'Active: 7129736 kB' 'Inactive: 3674932 kB' 'Active(anon): 6735180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515476 kB' 'Mapped: 200276 kB' 'Shmem: 6222912 kB' 'KReclaimable: 481068 kB' 'Slab: 1109860 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628792 kB' 'KernelStack: 22080 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8120628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
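The snapshot above pins down the numbers the next check uses: 1536 preallocated pages of Hugepagesize 2048 kB, none reserved, none surplus. A hedged sketch of the identity asserted at setup/hugepages.sh@107, using this run's values and the get_meminfo sketch from earlier:

    nr_hugepages=1536 surp=0 resv=0        # parsed from the trace above
    total=$(get_meminfo HugePages_Total)   # -> 1536 per the snapshot
    (( total == nr_hugepages + surp + resv )) ||
        echo 'hugepage accounting mismatch' >&2

Since surplus and reserved are both zero here, the check reduces to HugePages_Total == nr_hugepages, which is exactly the follow-up test at @109.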
[xtrace elided: the matching loop tests every key of the snapshot above against HugePages_Total, skipping each via continue]
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:14.215 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
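get_nodes fills nodes_sys with each NUMA node's live hugepage count (512 and 1024 here). The trace does not show where those two numbers are read from; a plausible sketch using the kernel's standard per-node sysfs counters (the sysfs path is stable kernel ABI, but its use inside get_nodes is our assumption):

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node count of preallocated 2048 kB hugepages (assumed source)
        nodes_sys[${node##*node}]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
    done
    no_nodes=${#nodes_sys[@]}    # 2 on this rig: node0=512, node1=1024

With the system-side counts in hand, the loop at @115-117 builds the expected per-node totals, starting with the node 0 surplus lookup traced next.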
[xtrace elided: get_meminfo prologue -- get=HugePages_Surp, node=0]
00:06:14.216 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:14.216 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:14.216 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:14.216 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:14.216 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 28032308 kB' 'MemUsed: 4606832 kB' 'SwapCached: 0 kB' 'Active: 2373148 kB' 'Inactive: 181896 kB' 'Active(anon): 2105284 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185460 kB' 'Mapped: 149580 kB' 'AnonPages: 372756 kB' 'Shmem: 1735700 kB' 'KernelStack: 13112 kB' 'PageTables: 5824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 405932 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 255916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
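One detail worth noting in this node-local pass: lines in /sys/devices/system/node/node0/meminfo carry a "Node <n> " prefix that /proc/meminfo lines lack, which is why common.sh@29 strips it before the key loop runs. A small illustration of that expansion:

    shopt -s extglob
    line='Node 0 HugePages_Surp:     0'   # per-node meminfo line format
    echo "${line#Node +([0-9]) }"         # -> HugePages_Surp:     0

After the strip, the same key-matching loop used on /proc/meminfo works unchanged on the per-node file.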
[xtrace elided: the matching loop walks the node0 snapshot keys via continue until HugePages_Surp]
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
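The loop at hugepages.sh@115-117 visible above drives both node passes: each node's expected count is topped up with the reserved pages and with that node's own surplus (all zero in this run) before comparison against nodes_sys. A sketch of that accumulation, reusing the get_meminfo sketch from earlier:

    resv=0   # from the HugePages_Rsvd lookup above; nodes_test preset per node
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # global reserve share
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # node-local surplus
    done

The trace now repeats the same per-node lookup for node 1.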
[xtrace elided: get_meminfo prologue -- get=HugePages_Surp, node=1]
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:14.217 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 14898296 kB' 'MemUsed: 12757784 kB' 'SwapCached: 0 kB' 'Active: 4756452 kB' 'Inactive: 3493036 kB' 'Active(anon): 4629760 kB' 'Inactive(anon): 0 kB' 'Active(file): 126692 kB' 'Inactive(file): 3493036 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8106984 kB' 'Mapped: 50696 kB' 'AnonPages: 142540 kB' 'Shmem: 4487256 kB' 'KernelStack: 8952 kB' 'PageTables: 2516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331052 kB' 'Slab: 703928 kB' 'SReclaimable: 331052 kB' 'SUnreclaim: 372876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the matching loop walks the node1 snapshot keys via continue until HugePages_Surp]
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:14.218 node0=512 expecting 512
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:06:14.218 node1=1024 expecting 1024
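The two "expecting" lines are the payoff: node0 holds 512 pages and node1 holds 1024, which also sum to the global HugePages_Total of 1536. The final check at hugepages.sh@130, traced next, compares the per-node values to the expectation as a single comma-joined string; a hedged sketch of that comparison (the join itself is our reconstruction):

    nodes=(512 1024)                      # actual per-node counts from the trace
    actual=$(IFS=,; echo "${nodes[*]}")   # -> "512,1024"
    [[ $actual == 512,1024 ]] && echo 'custom_alloc: per-node split verified'

The xtrace escapes the right-hand side of that [[ ]] test character by character, which is why it appears as \5\1\2\,\1\0\2\4 below.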
512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:14.218 00:06:14.218 real 0m3.685s 00:06:14.218 user 0m1.373s 00:06:14.218 sys 0m2.381s 00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.218 13:34:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:14.218 ************************************ 00:06:14.218 END TEST custom_alloc 00:06:14.218 ************************************ 00:06:14.218 13:34:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:14.218 13:34:06 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.218 13:34:06 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.218 13:34:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:14.218 ************************************ 00:06:14.218 START TEST no_shrink_alloc 00:06:14.218 ************************************ 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:14.218 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:14.219 13:34:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:17.511 0000:00:04.7 (8086 2021): Already using the 
vfio-pci driver 00:06:17.511 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:17.511 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:17.775 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:17.775 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:17.775 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:17.775 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:17.775 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43941396 kB' 'MemAvailable: 47844244 kB' 'Buffers: 2704 kB' 'Cached: 10289816 kB' 'SwapCached: 0 kB' 'Active: 7133376 kB' 'Inactive: 3674932 
kB' 'Active(anon): 6738820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519016 kB' 'Mapped: 200816 kB' 'Shmem: 6223032 kB' 'KReclaimable: 481068 kB' 'Slab: 1109796 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628728 kB' 'KernelStack: 22176 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8125324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216612 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 
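A few records back, get_test_nr_hugepages turned the requested size 2097152 (kB) into nr_hugepages=1024 and pinned all of it to node 0 (node_ids=('0'), nodes_test[0]=1024). Those numbers are consistent with dividing the request by the 2048 kB default hugepage size; a sketch of that arithmetic, hedged because the exact formula lives in setup/hugepages.sh:

# Sketch: how a 2097152 kB request becomes 1024 hugepages (illustrative).
size_kb=2097152
default_hugepage_kb=2048                            # Hugepagesize in /proc/meminfo
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 2097152 / 2048 = 1024
echo "nodes_test[0]=$nr_hugepages"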
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.775 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 
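Before this AnonHugePages scan started, hugepages.sh@96 gated it on the transparent-hugepage mode: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] succeeds because the kernel reports "[madvise]" as the selected mode, not "[never]", so the function goes on to read AnonHugePages (which comes back 0 just below). A sketch of that gate, reusing the helper sketched earlier; variable names are illustrative:

# Sketch: sample AnonHugePages only when THP is not disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)
fi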
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43938328 kB' 'MemAvailable: 47841176 kB' 'Buffers: 2704 kB' 'Cached: 10289820 kB' 'SwapCached: 0 kB' 'Active: 7136752 kB' 'Inactive: 3674932 kB' 'Active(anon): 6742196 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522400 kB' 'Mapped: 200792 kB' 'Shmem: 6223036 kB' 'KReclaimable: 481068 kB' 'Slab: 1109836 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628768 kB' 'KernelStack: 22224 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8128788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216600 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:17.776 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 
13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.777 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 
13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.778 13:34:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43936024 kB' 'MemAvailable: 47838872 kB' 'Buffers: 2704 kB' 'Cached: 10289836 kB' 'SwapCached: 0 kB' 'Active: 7131492 kB' 'Inactive: 3674932 kB' 'Active(anon): 6736936 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517088 kB' 'Mapped: 200288 kB' 'Shmem: 6223052 kB' 'KReclaimable: 481068 kB' 'Slab: 1109836 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628768 kB' 'KernelStack: 22256 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8124304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216660 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:17.778 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.778 13:34:10 
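With anon=0 and surp=0 established, the function now pulls HugePages_Rsvd the same way (the dump above already shows it as 0). Reserved pages are ones promised to a mapping but not yet faulted in; surplus pages are overcommit pages beyond nr_hugepages. The check this builds toward compares those counters with the 1024 pages requested for node 0; a rough sketch of that accounting, with illustrative names (the authoritative logic is verify_nr_hugepages in setup/hugepages.sh):

# Sketch: the accounting verify_nr_hugepages appears to be assembling.
nr_requested=1024                               # from get_test_nr_hugepages above
anon=$(get_meminfo_sketch AnonHugePages)        # 0 in this run
surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 per the dump above
total=$(get_meminfo_sketch HugePages_Total)     # 1024
free=$(get_meminfo_sketch HugePages_Free)       # 1024
# With no surplus or reserved pages in play, every requested page
# should exist and still be free before the test dirties them.
(( total == nr_requested && free == nr_requested )) ||
    echo "unexpected hugepage state: total=$total free=$free surp=$surp resv=$resv" >&2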
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:17.779 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:17.779 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same read / [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue triplet repeats for every remaining /proc/meminfo field, Buffers through HugePages_Free, until the requested key comes up]
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:17.781 nr_hugepages=1024
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:17.781 resv_hugepages=0
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:17.781 surplus_hugepages=0
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:17.781 anon_hugepages=0
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43936892 kB' 'MemAvailable: 47839740 kB' 'Buffers: 2704 kB' 'Cached: 10289860 kB' 'SwapCached: 0 kB' 'Active: 7131428 kB' 'Inactive: 3674932 kB' 'Active(anon): 6736872 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517008 kB' 'Mapped: 200288 kB' 'Shmem: 6223076 kB' 'KReclaimable: 481068 kB' 'Slab: 1109828 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 628760 kB' 'KernelStack: 22208 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8124328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216708 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
00:06:17.781 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: get_meminfo walks the snapshot above field by field, continue on each, until HugePages_Total is reached]
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
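The repeated read/continue records above are setup/common.sh's get_meminfo single-stepping through a snapshot of /proc/meminfo: it splits each 'Key: value kB' line on IFS=': ', skips until the key matches, then echoes the value. A minimal standalone re-creation of that parsing approach (the function name below is ours, not SPDK's; the key names are the standard /proc/meminfo layout):

    get_meminfo_field() {
        # Split "Key:   value [kB]" on ':'/space and print the value of
        # the first key matching $1 -- the loop the xtrace walks above.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field HugePages_Total   # prints 1024 on this runner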
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26979240 kB' 'MemUsed: 5659900 kB' 'SwapCached: 0 kB' 'Active: 2374472 kB' 'Inactive: 181896 kB' 'Active(anon): 2106608 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185548 kB' 'Mapped: 149592 kB' 'AnonPages: 374040 kB' 'Shmem: 1735788 kB' 'KernelStack: 13144 kB' 'PageTables: 5940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 405972 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 255956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:18.043 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: per-field scan over the node0 snapshot above, continue on each field, until HugePages_Surp is reached]
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
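The @117 call above passes a node index, so get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo. Lines in that file carry a "Node 0 " prefix, which the trace strips with mem=("${mem[@]#Node +([0-9]) }"). A sketch of the same per-node lookup that instead discards the prefix while reading (the function name is ours; the sysfs path and line format are standard Linux):

    get_node_meminfo_field() {
        # Node meminfo lines look like "Node 0 HugePages_Surp:     0",
        # so read two throwaway fields ("Node", "<id>") before the key.
        local node=$1 get=$2 skip1 skip2 var val rest
        while IFS=': ' read -r skip1 skip2 var val rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    get_node_meminfo_field 0 HugePages_Surp   # prints 0 in the run above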
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:18.044 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:18.045 node0=1024 expecting 1024
00:06:18.045 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:18.045 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:18.045 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:18.045 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:18.045 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:18.045 13:34:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:21.345 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:21.345 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:21.345 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
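The INFO line is the behavior this test case exists to exercise: setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no, and it left the existing 1024-page pool in place rather than shrinking it (hence the test name no_shrink_alloc). A hypothetical restatement of that grow-only policy against the standard hugepage sysfs knob (system-wide 2048kB pool for brevity; setup.sh itself allocates per NUMA node, and ensure_hugepages is our name, not SPDK's):

    ensure_hugepages() {
        # Grow the 2 MB hugepage pool to $1 pages if needed; never shrink.
        local want=$1
        local knob=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
        local have
        have=$(<"$knob")
        if (( want > have )); then
            echo "$want" > "$knob"   # requires root
        else
            echo "INFO: Requested $want hugepages but $have already allocated" >&2
        fi
    }
    ensure_hugepages 512   # with 1024 pages reserved, this only prints the INFO line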
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:21.345 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43934508 kB' 'MemAvailable: 47837356 kB' 'Buffers: 2704 kB' 'Cached: 10289956 kB' 'SwapCached: 0 kB' 'Active: 7132384 kB' 'Inactive: 3674932 kB' 'Active(anon): 6737828 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517392 kB' 'Mapped: 200392 kB' 'Shmem: 6223172 kB' 'KReclaimable: 481068 kB' 'Slab: 1110168 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 629100 kB' 'KernelStack: 22208 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8122248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216692 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
[log condensed: per-key xtrace elided -- every key from MemTotal through HardwareCorrupted fails the AnonHugePages match at setup/common.sh@32 and hits 'continue']
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
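The AnonHugePages read above shows get_meminfo's whole approach: snapshot the meminfo source once into an array, then split each 'Key: value' line on ': ' and return the value of the first matching key -- every non-matching key is one 'continue' in the xtrace. A self-contained reconstruction of that loop, assuming only what the trace shows (the real helper lives in the SPDK test scripts' setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the Node-prefix strip below

    # Reconstruction of the traced parser: fetch one key from /proc/meminfo
    # (or a per-node sysfs meminfo when a node number is supplied).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines start with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # one 'continue' per miss, as traced
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages    # prints 0 against the snapshot above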
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:21.347 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43936044 kB' 'MemAvailable: 47838892 kB' 'Buffers: 2704 kB' 'Cached: 10289960 kB' 'SwapCached: 0 kB' 'Active: 7131936 kB' 'Inactive: 3674932 kB' 'Active(anon): 6737380 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517000 kB' 'Mapped: 200372 kB' 'Shmem: 6223176 kB' 'KReclaimable: 481068 kB' 'Slab: 1110140 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 629072 kB' 'KernelStack: 22128 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8122268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
[log condensed: per-key xtrace elided -- every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match at setup/common.sh@32 and hits 'continue']
00:06:21.348 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:21.348 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:21.348 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:21.348 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
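At this point verify_nr_hugepages has anon=0 and surp=0 and reads HugePages_Rsvd next; the earlier 'node0=1024 expecting 1024' output is the same per-node comparison this function feeds. A condensed sketch of that comparison loop, matching the sorted_t/sorted_s lines traced at setup/hugepages.sh@126-130 (the array contents here are illustrative, taken from this run's values):

    #!/usr/bin/env bash
    # Sketch of the traced per-node verification. Indexed arrays keyed by
    # node id; sorted_t/sorted_s collect the distinct totals seen.
    declare -a nodes_test=([0]=1024)    # pages the test expects per node (illustrative)
    declare -a nodes_sys=([0]=1024)     # pages the kernel actually reports (illustrative)
    declare -a sorted_t sorted_s
    expecting=1024

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # array index is evaluated arithmetically
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting $expecting"
        [[ ${nodes_test[node]} == "$expecting" ]] || exit 1    # mismatch fails the test
    done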
00:06:21.348 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:21.349 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43936928 kB' 'MemAvailable: 47839776 kB' 'Buffers: 2704 kB' 'Cached: 10289976 kB' 'SwapCached: 0 kB' 'Active: 7131484 kB' 'Inactive: 3674932 kB' 'Active(anon): 6736928 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517020 kB' 'Mapped: 200296 kB' 'Shmem: 6223192 kB' 'KReclaimable: 481068 kB' 'Slab: 1110140 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 629072 kB' 'KernelStack: 22144 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8122288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB'
[log condensed: per-key xtrace elided -- keys from MemTotal through HardwareCorrupted fail the HugePages_Rsvd match at setup/common.sh@32 and hit 'continue'; the log breaks off mid-scan at the AnonHugePages check]
00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.350 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.350 13:34:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:21.351 nr_hugepages=1024 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:21.351 resv_hugepages=0 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:21.351 surplus_hugepages=0 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:21.351 anon_hugepages=0 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43936984 kB' 'MemAvailable: 47839832 kB' 'Buffers: 2704 kB' 'Cached: 10290016 kB' 'SwapCached: 0 kB' 'Active: 7131140 kB' 'Inactive: 3674932 kB' 'Active(anon): 6736584 kB' 'Inactive(anon): 0 kB' 'Active(file): 394556 kB' 'Inactive(file): 3674932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516624 kB' 
'Mapped: 200296 kB' 'Shmem: 6223232 kB' 'KReclaimable: 481068 kB' 'Slab: 1110140 kB' 'SReclaimable: 481068 kB' 'SUnreclaim: 629072 kB' 'KernelStack: 22128 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8122312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216628 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3057012 kB' 'DirectMap2M: 16551936 kB' 'DirectMap1G: 49283072 kB' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.351 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 
13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:21.352 13:34:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:21.352 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:21.353 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:21.614 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26963600 kB' 'MemUsed: 5675540 kB' 'SwapCached: 0 kB' 'Active: 2374736 kB' 'Inactive: 181896 kB' 'Active(anon): 2106872 kB' 'Inactive(anon): 0 kB' 'Active(file): 267864 kB' 'Inactive(file): 181896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2185688 kB' 'Mapped: 149600 kB' 'AnonPages: 374188 kB' 'Shmem: 1735928 kB' 'KernelStack: 13160 kB' 'PageTables: 5884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 150016 kB' 'Slab: 406064 kB' 'SReclaimable: 150016 kB' 'SUnreclaim: 256048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:21.614 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
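Read side by side, the trace above is a plain key/value lookup over a meminfo file. The following is a minimal bash sketch of that lookup, reconstructed from the xtrace rather than copied from SPDK's setup/common.sh (the real helper uses mapfile and strips a leading "Node N" prefix for per-node files, as the @28/@29 entries show; the while-read body here is a simplification):

    # sketch of the get_meminfo lookup traced above (reconstruction, not SPDK source)
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo
        # per-node queries read that node's own meminfo instead of the global one
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # each non-matching key appears in the xtrace as one
        # "[[ <key> == ... ]] / continue" pair
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # the "kB" column lands in $_, so val is the bare number
            return 0
        done < "$mem_f"
        return 1
    }

With this shape, "get_meminfo HugePages_Total" prints 1024 on this box (the "echo 1024" just above), and "get_meminfo HugePages_Surp 0" reads node0's file, which is the per-node pass that follows.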
00:06:21.614 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the per-key scan repeats over the node0 dump above, one @32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / @32 continue pair for each key from MemTotal through FilePmdMapped ...]
00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:21.615 13:34:14
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:21.615 node0=1024 expecting 1024 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:21.615 00:06:21.615 real 0m7.237s 00:06:21.615 user 0m2.641s 00:06:21.615 sys 0m4.699s 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.615 13:34:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:21.615 ************************************ 00:06:21.615 END TEST no_shrink_alloc 00:06:21.615 ************************************ 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:21.615 13:34:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:21.615 00:06:21.615 real 0m28.069s 00:06:21.615 user 0m9.857s 00:06:21.615 sys 0m17.132s 00:06:21.615 13:34:14 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.615 13:34:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:21.615 ************************************ 00:06:21.615 END TEST hugepages 00:06:21.615 ************************************ 00:06:21.615 13:34:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:21.615 13:34:14 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:21.615 13:34:14 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.615 13:34:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:21.615 ************************************ 00:06:21.615 START TEST driver 00:06:21.615 ************************************ 00:06:21.615 13:34:14 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:21.615 * Looking for test storage... 00:06:21.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:21.874 13:34:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:21.874 13:34:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:21.874 13:34:14 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:27.149 13:34:19 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:27.149 13:34:19 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:27.149 13:34:19 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.149 13:34:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:27.149 ************************************ 00:06:27.149 START TEST guess_driver 00:06:27.149 ************************************ 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:27.149 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:27.149 Looking for driver=vfio-pci 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:27.149 13:34:19 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.433 13:34:22 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.433 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:30.434 13:34:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:31.809 13:34:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:37.139 00:06:37.139 real 0m9.942s 00:06:37.139 user 0m2.600s 00:06:37.139 sys 0m4.927s 00:06:37.139 13:34:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.139 13:34:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:37.139 ************************************ 00:06:37.139 END TEST guess_driver 00:06:37.139 ************************************ 00:06:37.139 00:06:37.139 real 0m14.893s 00:06:37.139 user 0m3.985s 00:06:37.139 sys 0m7.640s 00:06:37.139 13:34:29 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.139 13:34:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:37.139 ************************************ 00:06:37.139 END TEST driver 00:06:37.139 ************************************ 00:06:37.139 13:34:29 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:37.139 13:34:29 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:37.139 13:34:29 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.139 13:34:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:37.139 ************************************ 00:06:37.139 START TEST devices 00:06:37.139 ************************************ 00:06:37.139 13:34:29 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:37.139 * Looking for test storage... 
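
[Editor's note] Before the devices suite gets going below, it is worth compressing what the guess_driver pass above actually did. A minimal sketch of the heuristic, reconstructed from the xtrace (fallback branches the trace never reaches are omitted, so this is not the full driver.sh):

    # pick_driver/vfio as traced: prefer vfio-pci when IOMMU groups
    # exist (176 on this node) or unsafe no-IOMMU mode is enabled, and
    # modprobe can resolve vfio_pci to real .ko files.
    vfio() {
        local iommu_groups
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            # is_driver: usable if the dependency chain lists module files
            if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
    }

The insmod lines in the trace are exactly that dependency chain: irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, and finally vfio-pci, which is why the test settles on vfio-pci here.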
00:06:37.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:37.139 13:34:29 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:37.139 13:34:29 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:37.139 13:34:29 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:37.139 13:34:29 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:40.426 13:34:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:06:40.426 13:34:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:40.427 13:34:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:40.427 13:34:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:40.427 No valid GPT data, bailing 00:06:40.427 13:34:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:40.427 13:34:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:40.427 13:34:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:40.427 13:34:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:40.427 13:34:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:40.427 13:34:33 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:40.427 13:34:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:40.427 13:34:33 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:40.427 13:34:33 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.427 13:34:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:40.685 ************************************ 00:06:40.685 START TEST nvme_mount 00:06:40.685 ************************************ 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:40.685 13:34:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:41.620 Creating new GPT entries in memory. 00:06:41.620 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:41.620 other utilities. 00:06:41.620 13:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:41.620 13:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:41.620 13:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:41.620 13:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:41.620 13:34:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:42.557 Creating new GPT entries in memory. 00:06:42.557 The operation has completed successfully. 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1210817 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:42.557 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:42.815 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:42.816 13:34:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.348 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.349 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:45.608 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:45.608 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:45.867 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:45.868 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:06:45.868 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:45.868 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:45.868 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:45.868 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:45.868 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:45.868 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:45.868 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:45.868 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:46.126 13:34:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
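
[Editor's note] The long runs of [[ 0000:xx:04.x == \0\0\0\0\:\d\8\:\0\0\.\0 ]] comparisons here are the verify helper scanning `setup.sh config` output one PCI line at a time. A condensed sketch under the same names the trace uses (the mount-point and test-file arguments are trimmed, and the exact plumbing of the read loop is an assumption):

    # verify(): limit setup.sh to the controller under test, then scan
    # its config output until that BDF reports the expected mounts,
    # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, ...".
    # "setup" is the wrapper traced above as "setup output config".
    verify() {
        local dev=$1 mounts=$2
        local pci status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$dev" ]] || continue
            [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED=$dev setup output config)
        ((found == 1))
    }

That is why every non-matching BDF produces one comparison plus one read entry, and the 0000:d8:00.0 line ends in found=1.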
00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.419 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:49.420 
13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:49.420 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:49.421 13:34:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:52.712 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:52.712 00:06:52.712 real 0m12.070s 00:06:52.712 user 0m3.258s 00:06:52.712 sys 0m6.652s 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.712 13:34:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:52.712 ************************************ 00:06:52.712 END TEST nvme_mount 00:06:52.712 ************************************ 00:06:52.712 13:34:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:52.712 13:34:45 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:52.712 13:34:45 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.712 13:34:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:52.712 ************************************ 00:06:52.712 START TEST dm_mount 00:06:52.712 ************************************ 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:52.712 13:34:45 
setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:52.712 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:52.713 13:34:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:53.650 Creating new GPT entries in memory. 00:06:53.650 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:53.650 other utilities. 00:06:53.650 13:34:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:53.650 13:34:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:53.650 13:34:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:53.650 13:34:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:53.650 13:34:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:55.028 Creating new GPT entries in memory. 00:06:55.028 The operation has completed successfully. 00:06:55.028 13:34:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:55.028 13:34:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:55.028 13:34:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:55.028 13:34:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:55.028 13:34:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:55.966 The operation has completed successfully. 
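
[Editor's note] The two sgdisk calls above follow directly from the traced arithmetic: size=1073741824 bytes becomes 2097152 sectors, the first partition starts at sector 2048, and each further partition starts one sector past the previous end. Collected as a runnable sketch:

    # partition_drive arithmetic as traced: 1073741824 / 512 = 2097152
    # sectors per 1 GiB partition, so part 1 spans 2048:2099199 and
    # part 2 spans 2099200:4196351, matching the sgdisk calls above.
    disk=/dev/nvme0n1 part_no=2
    size=$((1073741824 / 512))
    sgdisk "$disk" --zap-all
    part_start=0 part_end=0
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock serializes table rewrites against other readers (udev)
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done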
00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1215233 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:55.966 13:34:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
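
[Editor's note] The dm-0 device being verified here was created a few entries back with `dmsetup create nvme_dm_test`. The trace never shows the table fed to dmsetup, so the linear concatenation below is an assumption; the retry loop, readlink resolution, and holders checks mirror the trace:

    # Rebuild of the dm step: assumed table is a linear concat of the
    # two 2097152-sector partitions; create, wait for the mapper node,
    # then confirm dm-0 shows up as holder of both partitions.
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' |
        dmsetup create nvme_dm_test
    for t in {1..5}; do
        [[ -e /dev/mapper/nvme_dm_test ]] && break
        sleep 1   # the trace shows only the retry loop; sleep assumed
    done
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # /dev/dm-0 in the trace
    dm=${dm##*/}
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]

Those holders entries are also what the verify step keys on: the status line reads holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 instead of a plain mount@ pair.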
00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:59.254 13:34:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:59.254 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:59.254 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:06:59.254 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:59.254 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:59.254 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:06:59.255 13:34:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.607 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:02.608 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:02.608 00:07:02.608 real 0m9.880s 00:07:02.608 user 0m2.319s 00:07:02.608 sys 0m4.645s 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.608 13:34:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:02.608 ************************************ 00:07:02.608 END TEST dm_mount 00:07:02.608 ************************************ 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:02.608 13:34:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:02.866 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:02.866 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:07:02.866 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:02.866 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:02.866 13:34:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:02.866 13:34:55 
setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:02.866 13:34:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:02.866 13:34:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:02.866 13:34:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:02.866 13:34:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:02.866 13:34:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:02.866 00:07:02.866 real 0m26.349s 00:07:02.866 user 0m7.072s 00:07:02.866 sys 0m14.076s 00:07:02.866 13:34:55 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.866 13:34:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 ************************************ 00:07:02.866 END TEST devices 00:07:02.866 ************************************ 00:07:03.125 00:07:03.125 real 1m34.539s 00:07:03.125 user 0m28.810s 00:07:03.125 sys 0m54.304s 00:07:03.125 13:34:55 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.125 13:34:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:03.125 ************************************ 00:07:03.125 END TEST setup.sh 00:07:03.125 ************************************ 00:07:03.125 13:34:55 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:06.413 Hugepages 00:07:06.413 node hugesize free / total 00:07:06.413 node0 1048576kB 0 / 0 00:07:06.413 node0 2048kB 2048 / 2048 00:07:06.413 node1 1048576kB 0 / 0 00:07:06.413 node1 2048kB 0 / 0 00:07:06.413 00:07:06.413 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:06.413 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:06.413 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:06.672 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:06.672 13:34:59 -- spdk/autotest.sh@130 -- # uname -s 00:07:06.672 13:34:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:07:06.672 13:34:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:07:06.672 13:34:59 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:09.960 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 
00:07:09.960 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:09.960 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:10.218 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:10.218 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:10.218 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:10.218 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:10.218 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:12.120 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:07:12.120 13:35:04 -- common/autotest_common.sh@1531 -- # sleep 1 00:07:13.060 13:35:05 -- common/autotest_common.sh@1532 -- # bdfs=() 00:07:13.060 13:35:05 -- common/autotest_common.sh@1532 -- # local bdfs 00:07:13.060 13:35:05 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:07:13.060 13:35:05 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:07:13.060 13:35:05 -- common/autotest_common.sh@1512 -- # bdfs=() 00:07:13.060 13:35:05 -- common/autotest_common.sh@1512 -- # local bdfs 00:07:13.060 13:35:05 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:13.060 13:35:05 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:13.060 13:35:05 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:07:13.060 13:35:05 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:07:13.060 13:35:05 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:07:13.060 13:35:05 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:16.346 Waiting for block devices as requested 00:07:16.346 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:16.346 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:16.346 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:16.346 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:16.346 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:16.346 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:16.346 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:16.608 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:16.608 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:16.608 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:16.866 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:16.866 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:16.866 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:17.125 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:17.126 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:17.126 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:17.385 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:17.385 13:35:10 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:07:17.385 13:35:10 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme 00:07:17.385 13:35:10 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:07:17.385 13:35:10 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:07:17.385 13:35:10 -- 
common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:07:17.385 13:35:10 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1544 -- # grep oacs 00:07:17.385 13:35:10 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:07:17.385 13:35:10 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:07:17.385 13:35:10 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:07:17.385 13:35:10 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:07:17.385 13:35:10 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:07:17.385 13:35:10 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:07:17.385 13:35:10 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:07:17.385 13:35:10 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:07:17.385 13:35:10 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:07:17.385 13:35:10 -- common/autotest_common.sh@1556 -- # continue 00:07:17.385 13:35:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:17.385 13:35:10 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:17.385 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:07:17.643 13:35:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:17.643 13:35:10 -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:17.643 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:07:17.643 13:35:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:20.928 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:20.928 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:22.307 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:07:22.307 13:35:15 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:22.307 13:35:15 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:22.307 13:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.566 13:35:15 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:22.566 13:35:15 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:07:22.566 13:35:15 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:07:22.566 13:35:15 -- common/autotest_common.sh@1576 -- # bdfs=() 00:07:22.566 13:35:15 -- common/autotest_common.sh@1576 -- # local bdfs 00:07:22.566 13:35:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:07:22.566 13:35:15 -- common/autotest_common.sh@1512 -- # bdfs=() 00:07:22.566 13:35:15 -- common/autotest_common.sh@1512 -- # local bdfs 00:07:22.566 13:35:15 -- common/autotest_common.sh@1513 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:22.566 13:35:15 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:22.566 13:35:15 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:07:22.566 13:35:15 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:07:22.566 13:35:15 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:07:22.566 13:35:15 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:07:22.566 13:35:15 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:07:22.566 13:35:15 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:07:22.566 13:35:15 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:22.566 13:35:15 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:07:22.566 13:35:15 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:d8:00.0 00:07:22.567 13:35:15 -- common/autotest_common.sh@1591 -- # [[ -z 0000:d8:00.0 ]] 00:07:22.567 13:35:15 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1224776 00:07:22.567 13:35:15 -- common/autotest_common.sh@1597 -- # waitforlisten 1224776 00:07:22.567 13:35:15 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.567 13:35:15 -- common/autotest_common.sh@830 -- # '[' -z 1224776 ']' 00:07:22.567 13:35:15 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.567 13:35:15 -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:22.567 13:35:15 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.567 13:35:15 -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:22.567 13:35:15 -- common/autotest_common.sh@10 -- # set +x 00:07:22.567 [2024-06-11 13:35:15.415629] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:22.567 [2024-06-11 13:35:15.415694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224776 ] 00:07:22.567 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.825 [2024-06-11 13:35:15.519892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.825 [2024-06-11 13:35:15.608774] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.761 13:35:16 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:23.761 13:35:16 -- common/autotest_common.sh@863 -- # return 0 00:07:23.761 13:35:16 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:07:23.761 13:35:16 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:07:23.761 13:35:16 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:07:27.104 nvme0n1 00:07:27.104 13:35:19 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:27.104 [2024-06-11 13:35:19.598766] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:07:27.104 request: 00:07:27.104 { 00:07:27.104 "nvme_ctrlr_name": "nvme0", 00:07:27.104 "password": "test", 00:07:27.104 "method": "bdev_nvme_opal_revert", 00:07:27.104 "req_id": 1 00:07:27.104 } 00:07:27.104 Got JSON-RPC error response 00:07:27.104 response: 00:07:27.104 { 00:07:27.104 "code": -32602, 00:07:27.104 "message": "Invalid parameters" 00:07:27.104 } 00:07:27.104 13:35:19 -- common/autotest_common.sh@1603 -- # true 00:07:27.104 13:35:19 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:07:27.104 13:35:19 -- common/autotest_common.sh@1607 -- # killprocess 1224776 00:07:27.104 13:35:19 -- common/autotest_common.sh@949 -- # '[' -z 1224776 ']' 00:07:27.104 13:35:19 -- common/autotest_common.sh@953 -- # kill -0 1224776 00:07:27.104 13:35:19 -- common/autotest_common.sh@954 -- # uname 00:07:27.104 13:35:19 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:27.104 13:35:19 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1224776 00:07:27.104 13:35:19 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:27.104 13:35:19 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:27.104 13:35:19 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1224776' 00:07:27.104 killing process with pid 1224776 00:07:27.104 13:35:19 -- common/autotest_common.sh@968 -- # kill 1224776 00:07:27.104 13:35:19 -- common/autotest_common.sh@973 -- # wait 1224776 00:07:29.003 13:35:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:29.003 13:35:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:29.003 13:35:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:29.003 13:35:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:29.003 13:35:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:29.003 13:35:21 -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:29.003 13:35:21 -- common/autotest_common.sh@10 -- # set +x 00:07:29.003 13:35:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:29.003 13:35:21 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:29.003 13:35:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:29.003 13:35:21 
-- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.003 13:35:21 -- common/autotest_common.sh@10 -- # set +x 00:07:29.262 ************************************ 00:07:29.262 START TEST env 00:07:29.262 ************************************ 00:07:29.262 13:35:21 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:29.262 * Looking for test storage... 00:07:29.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:29.262 13:35:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:29.262 13:35:22 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:29.262 13:35:22 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.262 13:35:22 env -- common/autotest_common.sh@10 -- # set +x 00:07:29.262 ************************************ 00:07:29.262 START TEST env_memory 00:07:29.262 ************************************ 00:07:29.262 13:35:22 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:29.262 00:07:29.262 00:07:29.262 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.262 http://cunit.sourceforge.net/ 00:07:29.262 00:07:29.262 00:07:29.262 Suite: memory 00:07:29.262 Test: alloc and free memory map ...[2024-06-11 13:35:22.113358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:29.262 passed 00:07:29.262 Test: mem map translation ...[2024-06-11 13:35:22.140139] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:29.262 [2024-06-11 13:35:22.140160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:29.262 [2024-06-11 13:35:22.140210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:29.262 [2024-06-11 13:35:22.140223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:29.521 passed 00:07:29.521 Test: mem map registration ...[2024-06-11 13:35:22.193256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:29.521 [2024-06-11 13:35:22.193276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:29.521 passed 00:07:29.521 Test: mem map adjacent registrations ...passed 00:07:29.521 00:07:29.521 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.521 suites 1 1 n/a 0 0 00:07:29.521 tests 4 4 4 0 0 00:07:29.521 asserts 152 152 152 0 n/a 00:07:29.521 00:07:29.521 Elapsed time = 0.184 seconds 00:07:29.521 00:07:29.521 real 0m0.198s 00:07:29.521 user 0m0.186s 00:07:29.521 sys 0m0.011s 00:07:29.521 13:35:22 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:29.521 13:35:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:29.521 
************************************ 00:07:29.521 END TEST env_memory 00:07:29.521 ************************************ 00:07:29.521 13:35:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:29.521 13:35:22 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:29.521 13:35:22 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.521 13:35:22 env -- common/autotest_common.sh@10 -- # set +x 00:07:29.521 ************************************ 00:07:29.521 START TEST env_vtophys 00:07:29.521 ************************************ 00:07:29.521 13:35:22 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:29.521 EAL: lib.eal log level changed from notice to debug 00:07:29.521 EAL: Detected lcore 0 as core 0 on socket 0 00:07:29.521 EAL: Detected lcore 1 as core 1 on socket 0 00:07:29.521 EAL: Detected lcore 2 as core 2 on socket 0 00:07:29.521 EAL: Detected lcore 3 as core 3 on socket 0 00:07:29.521 EAL: Detected lcore 4 as core 4 on socket 0 00:07:29.521 EAL: Detected lcore 5 as core 5 on socket 0 00:07:29.521 EAL: Detected lcore 6 as core 6 on socket 0 00:07:29.521 EAL: Detected lcore 7 as core 8 on socket 0 00:07:29.521 EAL: Detected lcore 8 as core 9 on socket 0 00:07:29.521 EAL: Detected lcore 9 as core 10 on socket 0 00:07:29.521 EAL: Detected lcore 10 as core 11 on socket 0 00:07:29.521 EAL: Detected lcore 11 as core 12 on socket 0 00:07:29.521 EAL: Detected lcore 12 as core 13 on socket 0 00:07:29.521 EAL: Detected lcore 13 as core 14 on socket 0 00:07:29.521 EAL: Detected lcore 14 as core 16 on socket 0 00:07:29.521 EAL: Detected lcore 15 as core 17 on socket 0 00:07:29.521 EAL: Detected lcore 16 as core 18 on socket 0 00:07:29.521 EAL: Detected lcore 17 as core 19 on socket 0 00:07:29.521 EAL: Detected lcore 18 as core 20 on socket 0 00:07:29.521 EAL: Detected lcore 19 as core 21 on socket 0 00:07:29.521 EAL: Detected lcore 20 as core 22 on socket 0 00:07:29.521 EAL: Detected lcore 21 as core 24 on socket 0 00:07:29.521 EAL: Detected lcore 22 as core 25 on socket 0 00:07:29.521 EAL: Detected lcore 23 as core 26 on socket 0 00:07:29.521 EAL: Detected lcore 24 as core 27 on socket 0 00:07:29.521 EAL: Detected lcore 25 as core 28 on socket 0 00:07:29.521 EAL: Detected lcore 26 as core 29 on socket 0 00:07:29.521 EAL: Detected lcore 27 as core 30 on socket 0 00:07:29.521 EAL: Detected lcore 28 as core 0 on socket 1 00:07:29.521 EAL: Detected lcore 29 as core 1 on socket 1 00:07:29.521 EAL: Detected lcore 30 as core 2 on socket 1 00:07:29.521 EAL: Detected lcore 31 as core 3 on socket 1 00:07:29.521 EAL: Detected lcore 32 as core 4 on socket 1 00:07:29.521 EAL: Detected lcore 33 as core 5 on socket 1 00:07:29.521 EAL: Detected lcore 34 as core 6 on socket 1 00:07:29.521 EAL: Detected lcore 35 as core 8 on socket 1 00:07:29.521 EAL: Detected lcore 36 as core 9 on socket 1 00:07:29.521 EAL: Detected lcore 37 as core 10 on socket 1 00:07:29.521 EAL: Detected lcore 38 as core 11 on socket 1 00:07:29.521 EAL: Detected lcore 39 as core 12 on socket 1 00:07:29.521 EAL: Detected lcore 40 as core 13 on socket 1 00:07:29.521 EAL: Detected lcore 41 as core 14 on socket 1 00:07:29.521 EAL: Detected lcore 42 as core 16 on socket 1 00:07:29.521 EAL: Detected lcore 43 as core 17 on socket 1 00:07:29.521 EAL: Detected lcore 44 as core 18 on socket 1 00:07:29.521 EAL: Detected lcore 45 as core 19 on socket 1 00:07:29.521 EAL: 
Detected lcore 46 as core 20 on socket 1 00:07:29.521 EAL: Detected lcore 47 as core 21 on socket 1 00:07:29.521 EAL: Detected lcore 48 as core 22 on socket 1 00:07:29.521 EAL: Detected lcore 49 as core 24 on socket 1 00:07:29.521 EAL: Detected lcore 50 as core 25 on socket 1 00:07:29.521 EAL: Detected lcore 51 as core 26 on socket 1 00:07:29.521 EAL: Detected lcore 52 as core 27 on socket 1 00:07:29.521 EAL: Detected lcore 53 as core 28 on socket 1 00:07:29.522 EAL: Detected lcore 54 as core 29 on socket 1 00:07:29.522 EAL: Detected lcore 55 as core 30 on socket 1 00:07:29.522 EAL: Detected lcore 56 as core 0 on socket 0 00:07:29.522 EAL: Detected lcore 57 as core 1 on socket 0 00:07:29.522 EAL: Detected lcore 58 as core 2 on socket 0 00:07:29.522 EAL: Detected lcore 59 as core 3 on socket 0 00:07:29.522 EAL: Detected lcore 60 as core 4 on socket 0 00:07:29.522 EAL: Detected lcore 61 as core 5 on socket 0 00:07:29.522 EAL: Detected lcore 62 as core 6 on socket 0 00:07:29.522 EAL: Detected lcore 63 as core 8 on socket 0 00:07:29.522 EAL: Detected lcore 64 as core 9 on socket 0 00:07:29.522 EAL: Detected lcore 65 as core 10 on socket 0 00:07:29.522 EAL: Detected lcore 66 as core 11 on socket 0 00:07:29.522 EAL: Detected lcore 67 as core 12 on socket 0 00:07:29.522 EAL: Detected lcore 68 as core 13 on socket 0 00:07:29.522 EAL: Detected lcore 69 as core 14 on socket 0 00:07:29.522 EAL: Detected lcore 70 as core 16 on socket 0 00:07:29.522 EAL: Detected lcore 71 as core 17 on socket 0 00:07:29.522 EAL: Detected lcore 72 as core 18 on socket 0 00:07:29.522 EAL: Detected lcore 73 as core 19 on socket 0 00:07:29.522 EAL: Detected lcore 74 as core 20 on socket 0 00:07:29.522 EAL: Detected lcore 75 as core 21 on socket 0 00:07:29.522 EAL: Detected lcore 76 as core 22 on socket 0 00:07:29.522 EAL: Detected lcore 77 as core 24 on socket 0 00:07:29.522 EAL: Detected lcore 78 as core 25 on socket 0 00:07:29.522 EAL: Detected lcore 79 as core 26 on socket 0 00:07:29.522 EAL: Detected lcore 80 as core 27 on socket 0 00:07:29.522 EAL: Detected lcore 81 as core 28 on socket 0 00:07:29.522 EAL: Detected lcore 82 as core 29 on socket 0 00:07:29.522 EAL: Detected lcore 83 as core 30 on socket 0 00:07:29.522 EAL: Detected lcore 84 as core 0 on socket 1 00:07:29.522 EAL: Detected lcore 85 as core 1 on socket 1 00:07:29.522 EAL: Detected lcore 86 as core 2 on socket 1 00:07:29.522 EAL: Detected lcore 87 as core 3 on socket 1 00:07:29.522 EAL: Detected lcore 88 as core 4 on socket 1 00:07:29.522 EAL: Detected lcore 89 as core 5 on socket 1 00:07:29.522 EAL: Detected lcore 90 as core 6 on socket 1 00:07:29.522 EAL: Detected lcore 91 as core 8 on socket 1 00:07:29.522 EAL: Detected lcore 92 as core 9 on socket 1 00:07:29.522 EAL: Detected lcore 93 as core 10 on socket 1 00:07:29.522 EAL: Detected lcore 94 as core 11 on socket 1 00:07:29.522 EAL: Detected lcore 95 as core 12 on socket 1 00:07:29.522 EAL: Detected lcore 96 as core 13 on socket 1 00:07:29.522 EAL: Detected lcore 97 as core 14 on socket 1 00:07:29.522 EAL: Detected lcore 98 as core 16 on socket 1 00:07:29.522 EAL: Detected lcore 99 as core 17 on socket 1 00:07:29.522 EAL: Detected lcore 100 as core 18 on socket 1 00:07:29.522 EAL: Detected lcore 101 as core 19 on socket 1 00:07:29.522 EAL: Detected lcore 102 as core 20 on socket 1 00:07:29.522 EAL: Detected lcore 103 as core 21 on socket 1 00:07:29.522 EAL: Detected lcore 104 as core 22 on socket 1 00:07:29.522 EAL: Detected lcore 105 as core 24 on socket 1 00:07:29.522 EAL: Detected lcore 106 as core 
25 on socket 1 00:07:29.522 EAL: Detected lcore 107 as core 26 on socket 1 00:07:29.522 EAL: Detected lcore 108 as core 27 on socket 1 00:07:29.522 EAL: Detected lcore 109 as core 28 on socket 1 00:07:29.522 EAL: Detected lcore 110 as core 29 on socket 1 00:07:29.522 EAL: Detected lcore 111 as core 30 on socket 1 00:07:29.522 EAL: Maximum logical cores by configuration: 128 00:07:29.522 EAL: Detected CPU lcores: 112 00:07:29.522 EAL: Detected NUMA nodes: 2 00:07:29.522 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:29.522 EAL: Detected shared linkage of DPDK 00:07:29.522 EAL: No shared files mode enabled, IPC will be disabled 00:07:29.522 EAL: Bus pci wants IOVA as 'DC' 00:07:29.522 EAL: Buses did not request a specific IOVA mode. 00:07:29.522 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:29.522 EAL: Selected IOVA mode 'VA' 00:07:29.522 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.522 EAL: Probing VFIO support... 00:07:29.522 EAL: IOMMU type 1 (Type 1) is supported 00:07:29.522 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:29.522 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:29.522 EAL: VFIO support initialized 00:07:29.522 EAL: Ask a virtual area of 0x2e000 bytes 00:07:29.522 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:29.522 EAL: Setting up physically contiguous memory... 00:07:29.522 EAL: Setting maximum number of open files to 524288 00:07:29.522 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:29.522 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:29.522 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:29.522 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:29.522 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:29.522 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.522 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:29.522 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:29.522 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.522 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:07:29.522 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:29.522 EAL: Hugepages will be freed exactly as allocated. 00:07:29.522 EAL: No shared files mode enabled, IPC is disabled 00:07:29.522 EAL: No shared files mode enabled, IPC is disabled 00:07:29.522 EAL: TSC frequency is ~2500000 KHz 00:07:29.522 EAL: Main lcore 0 is ready (tid=7fefbf86da00;cpuset=[0]) 00:07:29.522 EAL: Trying to obtain current memory policy. 00:07:29.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.522 EAL: Restoring previous memory policy: 0 00:07:29.522 EAL: request: mp_malloc_sync 00:07:29.522 EAL: No shared files mode enabled, IPC is disabled 00:07:29.522 EAL: Heap on socket 0 was expanded by 2MB 00:07:29.522 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:29.781 EAL: Mem event callback 'spdk:(nil)' registered 00:07:29.781 00:07:29.781 00:07:29.781 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.781 http://cunit.sourceforge.net/ 00:07:29.781 00:07:29.781 00:07:29.781 Suite: components_suite 00:07:29.781 Test: vtophys_malloc_test ...passed 00:07:29.781 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 4MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 4MB 00:07:29.781 EAL: Trying to obtain current memory policy. 
00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 6MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 6MB 00:07:29.781 EAL: Trying to obtain current memory policy. 00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 10MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 10MB 00:07:29.781 EAL: Trying to obtain current memory policy. 00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 18MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 18MB 00:07:29.781 EAL: Trying to obtain current memory policy. 00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 34MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 34MB 00:07:29.781 EAL: Trying to obtain current memory policy. 00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 66MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 66MB 00:07:29.781 EAL: Trying to obtain current memory policy. 
00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 130MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was shrunk by 130MB 00:07:29.781 EAL: Trying to obtain current memory policy. 00:07:29.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.781 EAL: Restoring previous memory policy: 4 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.781 EAL: request: mp_malloc_sync 00:07:29.781 EAL: No shared files mode enabled, IPC is disabled 00:07:29.781 EAL: Heap on socket 0 was expanded by 258MB 00:07:29.781 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.040 EAL: request: mp_malloc_sync 00:07:30.040 EAL: No shared files mode enabled, IPC is disabled 00:07:30.040 EAL: Heap on socket 0 was shrunk by 258MB 00:07:30.040 EAL: Trying to obtain current memory policy. 00:07:30.040 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.040 EAL: Restoring previous memory policy: 4 00:07:30.040 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.040 EAL: request: mp_malloc_sync 00:07:30.040 EAL: No shared files mode enabled, IPC is disabled 00:07:30.040 EAL: Heap on socket 0 was expanded by 514MB 00:07:30.040 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.298 EAL: request: mp_malloc_sync 00:07:30.298 EAL: No shared files mode enabled, IPC is disabled 00:07:30.298 EAL: Heap on socket 0 was shrunk by 514MB 00:07:30.298 EAL: Trying to obtain current memory policy. 
00:07:30.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.298 EAL: Restoring previous memory policy: 4 00:07:30.298 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.298 EAL: request: mp_malloc_sync 00:07:30.298 EAL: No shared files mode enabled, IPC is disabled 00:07:30.298 EAL: Heap on socket 0 was expanded by 1026MB 00:07:30.556 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.814 EAL: request: mp_malloc_sync 00:07:30.814 EAL: No shared files mode enabled, IPC is disabled 00:07:30.814 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:30.814 passed 00:07:30.814 00:07:30.814 Run Summary: Type Total Ran Passed Failed Inactive 00:07:30.814 suites 1 1 n/a 0 0 00:07:30.814 tests 2 2 2 0 0 00:07:30.814 asserts 497 497 497 0 n/a 00:07:30.814 00:07:30.814 Elapsed time = 1.012 seconds 00:07:30.814 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.814 EAL: request: mp_malloc_sync 00:07:30.814 EAL: No shared files mode enabled, IPC is disabled 00:07:30.814 EAL: Heap on socket 0 was shrunk by 2MB 00:07:30.814 EAL: No shared files mode enabled, IPC is disabled 00:07:30.814 EAL: No shared files mode enabled, IPC is disabled 00:07:30.814 EAL: No shared files mode enabled, IPC is disabled 00:07:30.814 00:07:30.814 real 0m1.175s 00:07:30.814 user 0m0.669s 00:07:30.814 sys 0m0.472s 00:07:30.814 13:35:23 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.814 13:35:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:30.814 ************************************ 00:07:30.814 END TEST env_vtophys 00:07:30.814 ************************************ 00:07:30.814 13:35:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:30.814 13:35:23 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:30.814 13:35:23 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.814 13:35:23 env -- common/autotest_common.sh@10 -- # set +x 00:07:30.814 ************************************ 00:07:30.814 START TEST env_pci 00:07:30.814 ************************************ 00:07:30.814 13:35:23 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:30.814 00:07:30.814 00:07:30.814 CUnit - A unit testing framework for C - Version 2.1-3 00:07:30.814 http://cunit.sourceforge.net/ 00:07:30.814 00:07:30.814 00:07:30.814 Suite: pci 00:07:30.814 Test: pci_hook ...[2024-06-11 13:35:23.617976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1226324 has claimed it 00:07:30.814 EAL: Cannot find device (10000:00:01.0) 00:07:30.814 EAL: Failed to attach device on primary process 00:07:30.814 passed 00:07:30.814 00:07:30.814 Run Summary: Type Total Ran Passed Failed Inactive 00:07:30.814 suites 1 1 n/a 0 0 00:07:30.814 tests 1 1 1 0 0 00:07:30.814 asserts 25 25 25 0 n/a 00:07:30.814 00:07:30.814 Elapsed time = 0.039 seconds 00:07:30.814 00:07:30.814 real 0m0.061s 00:07:30.814 user 0m0.023s 00:07:30.814 sys 0m0.038s 00:07:30.814 13:35:23 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.814 13:35:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:30.814 ************************************ 00:07:30.814 END TEST env_pci 00:07:30.814 ************************************ 00:07:30.814 13:35:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:30.814 
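
[Editor's note] The env.sh trace around this point is assembling the EAL argument string that the next tests (env_dpdk_post_init in particular) are launched with. A minimal sketch of that logic, under the flags visible in the trace itself (-c 0x1 and --base-virtaddr=0x200000000000); the variable handling is illustrative, not a copy of the shipped script:

    argv='-c 0x1 '                              # core mask: pin the test binary to lcore 0
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000    # fixed VA base so hugepage mappings land at predictable addresses
    fi
    run_test env_dpdk_post_init "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv
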
13:35:23 env -- env/env.sh@15 -- # uname 00:07:30.814 13:35:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:30.814 13:35:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:30.814 13:35:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:30.814 13:35:23 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:30.814 13:35:23 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.814 13:35:23 env -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 ************************************ 00:07:31.073 START TEST env_dpdk_post_init 00:07:31.073 ************************************ 00:07:31.073 13:35:23 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:31.073 EAL: Detected CPU lcores: 112 00:07:31.073 EAL: Detected NUMA nodes: 2 00:07:31.073 EAL: Detected shared linkage of DPDK 00:07:31.073 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:31.073 EAL: Selected IOVA mode 'VA' 00:07:31.073 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.073 EAL: VFIO support initialized 00:07:31.073 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:31.073 EAL: Using IOMMU type 1 (Type 1) 00:07:31.073 EAL: Ignore mapping IO port bar(1) 00:07:31.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:07:31.073 EAL: Ignore mapping IO port bar(1) 00:07:31.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:07:31.073 EAL: Ignore mapping IO port bar(1) 00:07:31.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:07:31.073 EAL: Ignore mapping IO port bar(1) 00:07:31.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:07:31.073 EAL: Ignore mapping IO port bar(1) 00:07:31.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:07:31.073 EAL: Ignore mapping IO port bar(1) 00:07:31.073 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:07:31.331 EAL: Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:07:31.331 EAL: 
Ignore mapping IO port bar(1) 00:07:31.331 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:07:32.267 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:07:35.548 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:07:35.548 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:07:35.806 Starting DPDK initialization... 00:07:35.806 Starting SPDK post initialization... 00:07:35.806 SPDK NVMe probe 00:07:35.806 Attaching to 0000:d8:00.0 00:07:35.806 Attached to 0000:d8:00.0 00:07:35.806 Cleaning up... 00:07:35.806 00:07:35.806 real 0m4.923s 00:07:35.806 user 0m3.584s 00:07:35.806 sys 0m0.389s 00:07:35.806 13:35:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.806 13:35:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:35.806 ************************************ 00:07:35.806 END TEST env_dpdk_post_init 00:07:35.806 ************************************ 00:07:35.806 13:35:28 env -- env/env.sh@26 -- # uname 00:07:35.806 13:35:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:35.806 13:35:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:35.806 13:35:28 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:35.806 13:35:28 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.806 13:35:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.065 ************************************ 00:07:36.065 START TEST env_mem_callbacks 00:07:36.065 ************************************ 00:07:36.065 13:35:28 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:36.065 EAL: Detected CPU lcores: 112 00:07:36.065 EAL: Detected NUMA nodes: 2 00:07:36.065 EAL: Detected shared linkage of DPDK 00:07:36.065 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:36.065 EAL: Selected IOVA mode 'VA' 00:07:36.065 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.065 EAL: VFIO support initialized 00:07:36.065 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:36.065 00:07:36.065 00:07:36.065 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.065 http://cunit.sourceforge.net/ 00:07:36.065 00:07:36.065 00:07:36.065 Suite: memory 00:07:36.065 Test: test ... 
00:07:36.065 register 0x200000200000 2097152 00:07:36.065 malloc 3145728 00:07:36.065 register 0x200000400000 4194304 00:07:36.065 buf 0x200000500000 len 3145728 PASSED 00:07:36.065 malloc 64 00:07:36.065 buf 0x2000004fff40 len 64 PASSED 00:07:36.065 malloc 4194304 00:07:36.065 register 0x200000800000 6291456 00:07:36.065 buf 0x200000a00000 len 4194304 PASSED 00:07:36.065 free 0x200000500000 3145728 00:07:36.065 free 0x2000004fff40 64 00:07:36.065 unregister 0x200000400000 4194304 PASSED 00:07:36.065 free 0x200000a00000 4194304 00:07:36.065 unregister 0x200000800000 6291456 PASSED 00:07:36.065 malloc 8388608 00:07:36.065 register 0x200000400000 10485760 00:07:36.065 buf 0x200000600000 len 8388608 PASSED 00:07:36.065 free 0x200000600000 8388608 00:07:36.066 unregister 0x200000400000 10485760 PASSED 00:07:36.066 passed 00:07:36.066 00:07:36.066 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.066 suites 1 1 n/a 0 0 00:07:36.066 tests 1 1 1 0 0 00:07:36.066 asserts 15 15 15 0 n/a 00:07:36.066 00:07:36.066 Elapsed time = 0.008 seconds 00:07:36.066 00:07:36.066 real 0m0.080s 00:07:36.066 user 0m0.020s 00:07:36.066 sys 0m0.059s 00:07:36.066 13:35:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.066 13:35:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:36.066 ************************************ 00:07:36.066 END TEST env_mem_callbacks 00:07:36.066 ************************************ 00:07:36.066 00:07:36.066 real 0m6.946s 00:07:36.066 user 0m4.659s 00:07:36.066 sys 0m1.344s 00:07:36.066 13:35:28 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.066 13:35:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.066 ************************************ 00:07:36.066 END TEST env 00:07:36.066 ************************************ 00:07:36.066 13:35:28 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:36.066 13:35:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:36.066 13:35:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.066 13:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:36.066 ************************************ 00:07:36.066 START TEST rpc 00:07:36.066 ************************************ 00:07:36.066 13:35:28 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:36.325 * Looking for test storage... 00:07:36.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:36.325 13:35:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1227265 00:07:36.325 13:35:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.325 13:35:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:36.325 13:35:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1227265 00:07:36.325 13:35:29 rpc -- common/autotest_common.sh@830 -- # '[' -z 1227265 ']' 00:07:36.325 13:35:29 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.325 13:35:29 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:36.325 13:35:29 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
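
[Editor's note] The "waitforlisten 1227265" step here blocks until the freshly started spdk_tgt begins serving JSON-RPC on /var/tmp/spdk.sock before any rpc_cmd is issued. A rough, illustrative re-creation of that helper, assuming the default socket path and the rpc.py client used elsewhere in this log; the function and variable names are hypothetical, and the real helper in autotest_common.sh does more bookkeeping:

    waitforlisten() {                                     # sketch only, not the shipped helper
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1        # bail out if the target died during startup
            "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1                                     # not listening yet; retry
        done
        return 1                                          # timed out waiting for the listener
    }
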
00:07:36.325 13:35:29 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:36.325 13:35:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.325 [2024-06-11 13:35:29.122590] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:36.325 [2024-06-11 13:35:29.122665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227265 ] 00:07:36.325 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.325 [2024-06-11 13:35:29.224421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.583 [2024-06-11 13:35:29.312349] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:36.583 [2024-06-11 13:35:29.312394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1227265' to capture a snapshot of events at runtime. 00:07:36.583 [2024-06-11 13:35:29.312407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.583 [2024-06-11 13:35:29.312419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.583 [2024-06-11 13:35:29.312429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1227265 for offline analysis/debug. 00:07:36.583 [2024-06-11 13:35:29.312457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.150 13:35:30 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:37.150 13:35:30 rpc -- common/autotest_common.sh@863 -- # return 0 00:07:37.150 13:35:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:37.150 13:35:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:37.150 13:35:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:37.150 13:35:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:37.150 13:35:30 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:37.150 13:35:30 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.150 13:35:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.407 ************************************ 00:07:37.407 START TEST rpc_integrity 00:07:37.407 ************************************ 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:37.407 13:35:30 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.407 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.407 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:37.407 { 00:07:37.407 "name": "Malloc0", 00:07:37.407 "aliases": [ 00:07:37.407 "b3fb3435-5027-4ef7-af94-1eb888ede935" 00:07:37.407 ], 00:07:37.407 "product_name": "Malloc disk", 00:07:37.407 "block_size": 512, 00:07:37.407 "num_blocks": 16384, 00:07:37.407 "uuid": "b3fb3435-5027-4ef7-af94-1eb888ede935", 00:07:37.407 "assigned_rate_limits": { 00:07:37.407 "rw_ios_per_sec": 0, 00:07:37.407 "rw_mbytes_per_sec": 0, 00:07:37.407 "r_mbytes_per_sec": 0, 00:07:37.408 "w_mbytes_per_sec": 0 00:07:37.408 }, 00:07:37.408 "claimed": false, 00:07:37.408 "zoned": false, 00:07:37.408 "supported_io_types": { 00:07:37.408 "read": true, 00:07:37.408 "write": true, 00:07:37.408 "unmap": true, 00:07:37.408 "write_zeroes": true, 00:07:37.408 "flush": true, 00:07:37.408 "reset": true, 00:07:37.408 "compare": false, 00:07:37.408 "compare_and_write": false, 00:07:37.408 "abort": true, 00:07:37.408 "nvme_admin": false, 00:07:37.408 "nvme_io": false 00:07:37.408 }, 00:07:37.408 "memory_domains": [ 00:07:37.408 { 00:07:37.408 "dma_device_id": "system", 00:07:37.408 "dma_device_type": 1 00:07:37.408 }, 00:07:37.408 { 00:07:37.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.408 "dma_device_type": 2 00:07:37.408 } 00:07:37.408 ], 00:07:37.408 "driver_specific": {} 00:07:37.408 } 00:07:37.408 ]' 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 [2024-06-11 13:35:30.215380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:37.408 [2024-06-11 13:35:30.215416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.408 [2024-06-11 13:35:30.215433] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1ed80 00:07:37.408 [2024-06-11 13:35:30.215444] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.408 [2024-06-11 13:35:30.216797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.408 [2024-06-11 13:35:30.216826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:37.408 Passthru0 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:37.408 { 00:07:37.408 "name": "Malloc0", 00:07:37.408 "aliases": [ 00:07:37.408 "b3fb3435-5027-4ef7-af94-1eb888ede935" 00:07:37.408 ], 00:07:37.408 "product_name": "Malloc disk", 00:07:37.408 "block_size": 512, 00:07:37.408 "num_blocks": 16384, 00:07:37.408 "uuid": "b3fb3435-5027-4ef7-af94-1eb888ede935", 00:07:37.408 "assigned_rate_limits": { 00:07:37.408 "rw_ios_per_sec": 0, 00:07:37.408 "rw_mbytes_per_sec": 0, 00:07:37.408 "r_mbytes_per_sec": 0, 00:07:37.408 "w_mbytes_per_sec": 0 00:07:37.408 }, 00:07:37.408 "claimed": true, 00:07:37.408 "claim_type": "exclusive_write", 00:07:37.408 "zoned": false, 00:07:37.408 "supported_io_types": { 00:07:37.408 "read": true, 00:07:37.408 "write": true, 00:07:37.408 "unmap": true, 00:07:37.408 "write_zeroes": true, 00:07:37.408 "flush": true, 00:07:37.408 "reset": true, 00:07:37.408 "compare": false, 00:07:37.408 "compare_and_write": false, 00:07:37.408 "abort": true, 00:07:37.408 "nvme_admin": false, 00:07:37.408 "nvme_io": false 00:07:37.408 }, 00:07:37.408 "memory_domains": [ 00:07:37.408 { 00:07:37.408 "dma_device_id": "system", 00:07:37.408 "dma_device_type": 1 00:07:37.408 }, 00:07:37.408 { 00:07:37.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.408 "dma_device_type": 2 00:07:37.408 } 00:07:37.408 ], 00:07:37.408 "driver_specific": {} 00:07:37.408 }, 00:07:37.408 { 00:07:37.408 "name": "Passthru0", 00:07:37.408 "aliases": [ 00:07:37.408 "34502be6-0c0b-5a86-ba8c-91c347b86133" 00:07:37.408 ], 00:07:37.408 "product_name": "passthru", 00:07:37.408 "block_size": 512, 00:07:37.408 "num_blocks": 16384, 00:07:37.408 "uuid": "34502be6-0c0b-5a86-ba8c-91c347b86133", 00:07:37.408 "assigned_rate_limits": { 00:07:37.408 "rw_ios_per_sec": 0, 00:07:37.408 "rw_mbytes_per_sec": 0, 00:07:37.408 "r_mbytes_per_sec": 0, 00:07:37.408 "w_mbytes_per_sec": 0 00:07:37.408 }, 00:07:37.408 "claimed": false, 00:07:37.408 "zoned": false, 00:07:37.408 "supported_io_types": { 00:07:37.408 "read": true, 00:07:37.408 "write": true, 00:07:37.408 "unmap": true, 00:07:37.408 "write_zeroes": true, 00:07:37.408 "flush": true, 00:07:37.408 "reset": true, 00:07:37.408 "compare": false, 00:07:37.408 "compare_and_write": false, 00:07:37.408 "abort": true, 00:07:37.408 "nvme_admin": false, 00:07:37.408 "nvme_io": false 00:07:37.408 }, 00:07:37.408 "memory_domains": [ 00:07:37.408 { 00:07:37.408 "dma_device_id": "system", 00:07:37.408 "dma_device_type": 1 00:07:37.408 }, 00:07:37.408 { 00:07:37.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.408 "dma_device_type": 2 00:07:37.408 } 00:07:37.408 ], 00:07:37.408 "driver_specific": { 00:07:37.408 "passthru": { 00:07:37.408 "name": "Passthru0", 00:07:37.408 "base_bdev_name": "Malloc0" 00:07:37.408 } 00:07:37.408 } 00:07:37.408 } 00:07:37.408 ]' 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 
13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.408 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.408 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.666 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:37.666 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:37.666 13:35:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:37.666 00:07:37.666 real 0m0.298s 00:07:37.666 user 0m0.190s 00:07:37.666 sys 0m0.045s 00:07:37.666 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.666 13:35:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 ************************************ 00:07:37.666 END TEST rpc_integrity 00:07:37.666 ************************************ 00:07:37.666 13:35:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:37.666 13:35:30 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:37.666 13:35:30 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.666 13:35:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 ************************************ 00:07:37.666 START TEST rpc_plugins 00:07:37.666 ************************************ 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:37.666 { 00:07:37.666 "name": "Malloc1", 00:07:37.666 "aliases": [ 00:07:37.666 "d2695417-7c7b-414d-a619-f79a011c84b8" 00:07:37.666 ], 00:07:37.666 "product_name": "Malloc disk", 00:07:37.666 "block_size": 4096, 00:07:37.666 "num_blocks": 256, 00:07:37.666 "uuid": "d2695417-7c7b-414d-a619-f79a011c84b8", 00:07:37.666 "assigned_rate_limits": { 00:07:37.666 "rw_ios_per_sec": 0, 00:07:37.666 "rw_mbytes_per_sec": 0, 00:07:37.666 "r_mbytes_per_sec": 0, 00:07:37.666 "w_mbytes_per_sec": 0 00:07:37.666 }, 00:07:37.666 "claimed": false, 00:07:37.666 "zoned": false, 00:07:37.666 "supported_io_types": { 00:07:37.666 "read": true, 00:07:37.666 "write": true, 00:07:37.666 "unmap": true, 00:07:37.666 "write_zeroes": true, 00:07:37.666 
"flush": true, 00:07:37.666 "reset": true, 00:07:37.666 "compare": false, 00:07:37.666 "compare_and_write": false, 00:07:37.666 "abort": true, 00:07:37.666 "nvme_admin": false, 00:07:37.666 "nvme_io": false 00:07:37.666 }, 00:07:37.666 "memory_domains": [ 00:07:37.666 { 00:07:37.666 "dma_device_id": "system", 00:07:37.666 "dma_device_type": 1 00:07:37.666 }, 00:07:37.666 { 00:07:37.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.666 "dma_device_type": 2 00:07:37.666 } 00:07:37.666 ], 00:07:37.666 "driver_specific": {} 00:07:37.666 } 00:07:37.666 ]' 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:37.666 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:37.666 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:37.924 13:35:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:37.924 00:07:37.924 real 0m0.151s 00:07:37.924 user 0m0.087s 00:07:37.924 sys 0m0.028s 00:07:37.924 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.924 13:35:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:37.924 ************************************ 00:07:37.924 END TEST rpc_plugins 00:07:37.924 ************************************ 00:07:37.924 13:35:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:37.924 13:35:30 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:37.924 13:35:30 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.924 13:35:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.924 ************************************ 00:07:37.924 START TEST rpc_trace_cmd_test 00:07:37.924 ************************************ 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:37.924 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:37.924 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1227265", 00:07:37.924 "tpoint_group_mask": "0x8", 00:07:37.924 "iscsi_conn": { 00:07:37.924 "mask": "0x2", 00:07:37.924 "tpoint_mask": "0x0" 00:07:37.924 }, 00:07:37.924 "scsi": { 00:07:37.924 "mask": "0x4", 00:07:37.924 "tpoint_mask": "0x0" 00:07:37.924 }, 00:07:37.924 "bdev": { 00:07:37.924 "mask": "0x8", 00:07:37.924 "tpoint_mask": 
"0xffffffffffffffff" 00:07:37.924 }, 00:07:37.924 "nvmf_rdma": { 00:07:37.924 "mask": "0x10", 00:07:37.924 "tpoint_mask": "0x0" 00:07:37.924 }, 00:07:37.924 "nvmf_tcp": { 00:07:37.924 "mask": "0x20", 00:07:37.924 "tpoint_mask": "0x0" 00:07:37.924 }, 00:07:37.924 "ftl": { 00:07:37.924 "mask": "0x40", 00:07:37.924 "tpoint_mask": "0x0" 00:07:37.924 }, 00:07:37.924 "blobfs": { 00:07:37.924 "mask": "0x80", 00:07:37.924 "tpoint_mask": "0x0" 00:07:37.924 }, 00:07:37.924 "dsa": { 00:07:37.924 "mask": "0x200", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 }, 00:07:37.925 "thread": { 00:07:37.925 "mask": "0x400", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 }, 00:07:37.925 "nvme_pcie": { 00:07:37.925 "mask": "0x800", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 }, 00:07:37.925 "iaa": { 00:07:37.925 "mask": "0x1000", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 }, 00:07:37.925 "nvme_tcp": { 00:07:37.925 "mask": "0x2000", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 }, 00:07:37.925 "bdev_nvme": { 00:07:37.925 "mask": "0x4000", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 }, 00:07:37.925 "sock": { 00:07:37.925 "mask": "0x8000", 00:07:37.925 "tpoint_mask": "0x0" 00:07:37.925 } 00:07:37.925 }' 00:07:37.925 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:37.925 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:37.925 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:37.925 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:37.925 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:38.183 00:07:38.183 real 0m0.245s 00:07:38.183 user 0m0.199s 00:07:38.183 sys 0m0.037s 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.183 13:35:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.183 ************************************ 00:07:38.183 END TEST rpc_trace_cmd_test 00:07:38.183 ************************************ 00:07:38.183 13:35:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:38.183 13:35:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:38.183 13:35:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:38.183 13:35:30 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:38.183 13:35:30 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:38.183 13:35:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.183 ************************************ 00:07:38.183 START TEST rpc_daemon_integrity 00:07:38.183 ************************************ 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:38.183 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.184 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.184 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.184 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:38.184 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:38.184 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.184 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:38.442 { 00:07:38.442 "name": "Malloc2", 00:07:38.442 "aliases": [ 00:07:38.442 "3a709cbe-c4ca-4b18-a700-ca1b074f4bde" 00:07:38.442 ], 00:07:38.442 "product_name": "Malloc disk", 00:07:38.442 "block_size": 512, 00:07:38.442 "num_blocks": 16384, 00:07:38.442 "uuid": "3a709cbe-c4ca-4b18-a700-ca1b074f4bde", 00:07:38.442 "assigned_rate_limits": { 00:07:38.442 "rw_ios_per_sec": 0, 00:07:38.442 "rw_mbytes_per_sec": 0, 00:07:38.442 "r_mbytes_per_sec": 0, 00:07:38.442 "w_mbytes_per_sec": 0 00:07:38.442 }, 00:07:38.442 "claimed": false, 00:07:38.442 "zoned": false, 00:07:38.442 "supported_io_types": { 00:07:38.442 "read": true, 00:07:38.442 "write": true, 00:07:38.442 "unmap": true, 00:07:38.442 "write_zeroes": true, 00:07:38.442 "flush": true, 00:07:38.442 "reset": true, 00:07:38.442 "compare": false, 00:07:38.442 "compare_and_write": false, 00:07:38.442 "abort": true, 00:07:38.442 "nvme_admin": false, 00:07:38.442 "nvme_io": false 00:07:38.442 }, 00:07:38.442 "memory_domains": [ 00:07:38.442 { 00:07:38.442 "dma_device_id": "system", 00:07:38.442 "dma_device_type": 1 00:07:38.442 }, 00:07:38.442 { 00:07:38.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.442 "dma_device_type": 2 00:07:38.442 } 00:07:38.442 ], 00:07:38.442 "driver_specific": {} 00:07:38.442 } 00:07:38.442 ]' 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.442 [2024-06-11 13:35:31.150001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:38.442 [2024-06-11 13:35:31.150036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.442 [2024-06-11 13:35:31.150055] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1e880 00:07:38.442 [2024-06-11 13:35:31.150067] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.442 [2024-06-11 13:35:31.151303] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.442 [2024-06-11 13:35:31.151329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:38.442 Passthru0 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.442 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:38.443 { 00:07:38.443 "name": "Malloc2", 00:07:38.443 "aliases": [ 00:07:38.443 "3a709cbe-c4ca-4b18-a700-ca1b074f4bde" 00:07:38.443 ], 00:07:38.443 "product_name": "Malloc disk", 00:07:38.443 "block_size": 512, 00:07:38.443 "num_blocks": 16384, 00:07:38.443 "uuid": "3a709cbe-c4ca-4b18-a700-ca1b074f4bde", 00:07:38.443 "assigned_rate_limits": { 00:07:38.443 "rw_ios_per_sec": 0, 00:07:38.443 "rw_mbytes_per_sec": 0, 00:07:38.443 "r_mbytes_per_sec": 0, 00:07:38.443 "w_mbytes_per_sec": 0 00:07:38.443 }, 00:07:38.443 "claimed": true, 00:07:38.443 "claim_type": "exclusive_write", 00:07:38.443 "zoned": false, 00:07:38.443 "supported_io_types": { 00:07:38.443 "read": true, 00:07:38.443 "write": true, 00:07:38.443 "unmap": true, 00:07:38.443 "write_zeroes": true, 00:07:38.443 "flush": true, 00:07:38.443 "reset": true, 00:07:38.443 "compare": false, 00:07:38.443 "compare_and_write": false, 00:07:38.443 "abort": true, 00:07:38.443 "nvme_admin": false, 00:07:38.443 "nvme_io": false 00:07:38.443 }, 00:07:38.443 "memory_domains": [ 00:07:38.443 { 00:07:38.443 "dma_device_id": "system", 00:07:38.443 "dma_device_type": 1 00:07:38.443 }, 00:07:38.443 { 00:07:38.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.443 "dma_device_type": 2 00:07:38.443 } 00:07:38.443 ], 00:07:38.443 "driver_specific": {} 00:07:38.443 }, 00:07:38.443 { 00:07:38.443 "name": "Passthru0", 00:07:38.443 "aliases": [ 00:07:38.443 "959214c5-a58e-53d2-9cd8-6dad929825af" 00:07:38.443 ], 00:07:38.443 "product_name": "passthru", 00:07:38.443 "block_size": 512, 00:07:38.443 "num_blocks": 16384, 00:07:38.443 "uuid": "959214c5-a58e-53d2-9cd8-6dad929825af", 00:07:38.443 "assigned_rate_limits": { 00:07:38.443 "rw_ios_per_sec": 0, 00:07:38.443 "rw_mbytes_per_sec": 0, 00:07:38.443 "r_mbytes_per_sec": 0, 00:07:38.443 "w_mbytes_per_sec": 0 00:07:38.443 }, 00:07:38.443 "claimed": false, 00:07:38.443 "zoned": false, 00:07:38.443 "supported_io_types": { 00:07:38.443 "read": true, 00:07:38.443 "write": true, 00:07:38.443 "unmap": true, 00:07:38.443 "write_zeroes": true, 00:07:38.443 "flush": true, 00:07:38.443 "reset": true, 00:07:38.443 "compare": false, 00:07:38.443 "compare_and_write": false, 00:07:38.443 "abort": true, 00:07:38.443 "nvme_admin": false, 00:07:38.443 "nvme_io": false 00:07:38.443 }, 00:07:38.443 "memory_domains": [ 00:07:38.443 { 00:07:38.443 "dma_device_id": "system", 00:07:38.443 "dma_device_type": 1 00:07:38.443 }, 00:07:38.443 { 00:07:38.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.443 "dma_device_type": 2 00:07:38.443 } 00:07:38.443 ], 00:07:38.443 "driver_specific": { 00:07:38.443 "passthru": { 00:07:38.443 "name": "Passthru0", 00:07:38.443 "base_bdev_name": "Malloc2" 00:07:38.443 } 00:07:38.443 } 00:07:38.443 } 00:07:38.443 ]' 00:07:38.443 13:35:31 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:38.443 00:07:38.443 real 0m0.285s 00:07:38.443 user 0m0.170s 00:07:38.443 sys 0m0.054s 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.443 13:35:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:38.443 ************************************ 00:07:38.443 END TEST rpc_daemon_integrity 00:07:38.443 ************************************ 00:07:38.443 13:35:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:38.443 13:35:31 rpc -- rpc/rpc.sh@84 -- # killprocess 1227265 00:07:38.443 13:35:31 rpc -- common/autotest_common.sh@949 -- # '[' -z 1227265 ']' 00:07:38.443 13:35:31 rpc -- common/autotest_common.sh@953 -- # kill -0 1227265 00:07:38.443 13:35:31 rpc -- common/autotest_common.sh@954 -- # uname 00:07:38.443 13:35:31 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:38.443 13:35:31 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1227265 00:07:38.701 13:35:31 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:38.701 13:35:31 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:38.701 13:35:31 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1227265' 00:07:38.701 killing process with pid 1227265 00:07:38.701 13:35:31 rpc -- common/autotest_common.sh@968 -- # kill 1227265 00:07:38.701 13:35:31 rpc -- common/autotest_common.sh@973 -- # wait 1227265 00:07:38.959 00:07:38.959 real 0m2.771s 00:07:38.959 user 0m3.529s 00:07:38.959 sys 0m0.904s 00:07:38.959 13:35:31 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.959 13:35:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.959 ************************************ 00:07:38.959 END TEST rpc 00:07:38.960 ************************************ 00:07:38.960 13:35:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:38.960 13:35:31 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:38.960 13:35:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:38.960 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:38.960 ************************************ 00:07:38.960 START TEST skip_rpc 00:07:38.960 ************************************ 00:07:38.960 13:35:31 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:39.217 * Looking for test storage... 00:07:39.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:39.217 13:35:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:39.217 13:35:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:39.217 13:35:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:39.217 13:35:31 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:39.217 13:35:31 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:39.217 13:35:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.217 ************************************ 00:07:39.217 START TEST skip_rpc 00:07:39.217 ************************************ 00:07:39.217 13:35:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:07:39.217 13:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1227958 00:07:39.217 13:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.217 13:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:39.217 13:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:39.217 [2024-06-11 13:35:32.005833] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
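The skip_rpc case starting up above reduces to a short shell sequence. A minimal sketch, assuming an SPDK checkout and the stock scripts/rpc.py client (pid handling and paths are illustrative, not taken from this run):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target without an RPC server
    tgt=$!
    sleep 5                                       # the test uses the same fixed delay
    if scripts/rpc.py spdk_get_version; then      # any RPC call must fail here
        echo "unexpected: RPC succeeded with --no-rpc-server" >&2
    fi
    kill $tgt && wait $tgt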
00:07:39.217 [2024-06-11 13:35:32.005888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227958 ] 00:07:39.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.217 [2024-06-11 13:35:32.106630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.475 [2024-06-11 13:35:32.189667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1227958 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1227958 ']' 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1227958 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:44.738 13:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1227958 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1227958' 00:07:44.738 killing process with pid 1227958 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1227958 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1227958 00:07:44.738 00:07:44.738 real 0m5.392s 00:07:44.738 user 0m5.119s 00:07:44.738 sys 0m0.306s 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.738 13:35:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.738 ************************************ 00:07:44.738 END TEST skip_rpc 
00:07:44.738 ************************************ 00:07:44.738 13:35:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:44.738 13:35:37 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:44.738 13:35:37 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:44.738 13:35:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.738 ************************************ 00:07:44.738 START TEST skip_rpc_with_json 00:07:44.738 ************************************ 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1229047 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1229047 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1229047 ']' 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:44.738 13:35:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:44.738 [2024-06-11 13:35:37.478309] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
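The save/replay flow that skip_rpc_with_json drives below can be reproduced by hand. A rough sketch, assuming the stock scripts/rpc.py client; the config path is illustrative:

    scripts/rpc.py nvmf_create_transport -t tcp     # create state worth persisting
    scripts/rpc.py save_config > /tmp/config.json   # dump the live config as JSON
    # restart the target from the saved file; the transport should come back:
    build/bin/spdk_tgt -m 0x1 --json /tmp/config.json &
    sleep 5
    scripts/rpc.py nvmf_get_transports --trtype tcp

The test itself verifies the replay by grepping the target's log for 'TCP Transport Init' rather than querying over RPC; either check works.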
00:07:44.738 [2024-06-11 13:35:37.478365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229047 ] 00:07:44.738 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.738 [2024-06-11 13:35:37.580626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.996 [2024-06-11 13:35:37.667984] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:45.561 [2024-06-11 13:35:38.370054] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:45.561 request: 00:07:45.561 { 00:07:45.561 "trtype": "tcp", 00:07:45.561 "method": "nvmf_get_transports", 00:07:45.561 "req_id": 1 00:07:45.561 } 00:07:45.561 Got JSON-RPC error response 00:07:45.561 response: 00:07:45.561 { 00:07:45.561 "code": -19, 00:07:45.561 "message": "No such device" 00:07:45.561 } 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:45.561 [2024-06-11 13:35:38.378159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:45.561 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:45.818 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:45.818 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:45.818 { 00:07:45.818 "subsystems": [ 00:07:45.818 { 00:07:45.818 "subsystem": "keyring", 00:07:45.818 "config": [] 00:07:45.818 }, 00:07:45.818 { 00:07:45.818 "subsystem": "iobuf", 00:07:45.818 "config": [ 00:07:45.818 { 00:07:45.818 "method": "iobuf_set_options", 00:07:45.818 "params": { 00:07:45.818 "small_pool_count": 8192, 00:07:45.818 "large_pool_count": 1024, 00:07:45.818 "small_bufsize": 8192, 00:07:45.818 "large_bufsize": 135168 00:07:45.818 } 00:07:45.818 } 00:07:45.818 ] 00:07:45.818 }, 00:07:45.818 { 00:07:45.818 "subsystem": "sock", 00:07:45.818 "config": [ 00:07:45.818 { 00:07:45.818 "method": "sock_set_default_impl", 00:07:45.818 "params": { 00:07:45.818 "impl_name": "posix" 00:07:45.818 } 00:07:45.818 }, 00:07:45.818 { 00:07:45.818 "method": "sock_impl_set_options", 00:07:45.818 "params": { 00:07:45.818 "impl_name": "ssl", 00:07:45.818 "recv_buf_size": 
4096, 00:07:45.818 "send_buf_size": 4096, 00:07:45.818 "enable_recv_pipe": true, 00:07:45.818 "enable_quickack": false, 00:07:45.818 "enable_placement_id": 0, 00:07:45.819 "enable_zerocopy_send_server": true, 00:07:45.819 "enable_zerocopy_send_client": false, 00:07:45.819 "zerocopy_threshold": 0, 00:07:45.819 "tls_version": 0, 00:07:45.819 "enable_ktls": false 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "sock_impl_set_options", 00:07:45.819 "params": { 00:07:45.819 "impl_name": "posix", 00:07:45.819 "recv_buf_size": 2097152, 00:07:45.819 "send_buf_size": 2097152, 00:07:45.819 "enable_recv_pipe": true, 00:07:45.819 "enable_quickack": false, 00:07:45.819 "enable_placement_id": 0, 00:07:45.819 "enable_zerocopy_send_server": true, 00:07:45.819 "enable_zerocopy_send_client": false, 00:07:45.819 "zerocopy_threshold": 0, 00:07:45.819 "tls_version": 0, 00:07:45.819 "enable_ktls": false 00:07:45.819 } 00:07:45.819 } 00:07:45.819 ] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "vmd", 00:07:45.819 "config": [] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "accel", 00:07:45.819 "config": [ 00:07:45.819 { 00:07:45.819 "method": "accel_set_options", 00:07:45.819 "params": { 00:07:45.819 "small_cache_size": 128, 00:07:45.819 "large_cache_size": 16, 00:07:45.819 "task_count": 2048, 00:07:45.819 "sequence_count": 2048, 00:07:45.819 "buf_count": 2048 00:07:45.819 } 00:07:45.819 } 00:07:45.819 ] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "bdev", 00:07:45.819 "config": [ 00:07:45.819 { 00:07:45.819 "method": "bdev_set_options", 00:07:45.819 "params": { 00:07:45.819 "bdev_io_pool_size": 65535, 00:07:45.819 "bdev_io_cache_size": 256, 00:07:45.819 "bdev_auto_examine": true, 00:07:45.819 "iobuf_small_cache_size": 128, 00:07:45.819 "iobuf_large_cache_size": 16 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "bdev_raid_set_options", 00:07:45.819 "params": { 00:07:45.819 "process_window_size_kb": 1024 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "bdev_iscsi_set_options", 00:07:45.819 "params": { 00:07:45.819 "timeout_sec": 30 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "bdev_nvme_set_options", 00:07:45.819 "params": { 00:07:45.819 "action_on_timeout": "none", 00:07:45.819 "timeout_us": 0, 00:07:45.819 "timeout_admin_us": 0, 00:07:45.819 "keep_alive_timeout_ms": 10000, 00:07:45.819 "arbitration_burst": 0, 00:07:45.819 "low_priority_weight": 0, 00:07:45.819 "medium_priority_weight": 0, 00:07:45.819 "high_priority_weight": 0, 00:07:45.819 "nvme_adminq_poll_period_us": 10000, 00:07:45.819 "nvme_ioq_poll_period_us": 0, 00:07:45.819 "io_queue_requests": 0, 00:07:45.819 "delay_cmd_submit": true, 00:07:45.819 "transport_retry_count": 4, 00:07:45.819 "bdev_retry_count": 3, 00:07:45.819 "transport_ack_timeout": 0, 00:07:45.819 "ctrlr_loss_timeout_sec": 0, 00:07:45.819 "reconnect_delay_sec": 0, 00:07:45.819 "fast_io_fail_timeout_sec": 0, 00:07:45.819 "disable_auto_failback": false, 00:07:45.819 "generate_uuids": false, 00:07:45.819 "transport_tos": 0, 00:07:45.819 "nvme_error_stat": false, 00:07:45.819 "rdma_srq_size": 0, 00:07:45.819 "io_path_stat": false, 00:07:45.819 "allow_accel_sequence": false, 00:07:45.819 "rdma_max_cq_size": 0, 00:07:45.819 "rdma_cm_event_timeout_ms": 0, 00:07:45.819 "dhchap_digests": [ 00:07:45.819 "sha256", 00:07:45.819 "sha384", 00:07:45.819 "sha512" 00:07:45.819 ], 00:07:45.819 "dhchap_dhgroups": [ 00:07:45.819 "null", 00:07:45.819 "ffdhe2048", 00:07:45.819 "ffdhe3072", 
00:07:45.819 "ffdhe4096", 00:07:45.819 "ffdhe6144", 00:07:45.819 "ffdhe8192" 00:07:45.819 ] 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "bdev_nvme_set_hotplug", 00:07:45.819 "params": { 00:07:45.819 "period_us": 100000, 00:07:45.819 "enable": false 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "bdev_wait_for_examine" 00:07:45.819 } 00:07:45.819 ] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "scsi", 00:07:45.819 "config": null 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "scheduler", 00:07:45.819 "config": [ 00:07:45.819 { 00:07:45.819 "method": "framework_set_scheduler", 00:07:45.819 "params": { 00:07:45.819 "name": "static" 00:07:45.819 } 00:07:45.819 } 00:07:45.819 ] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "vhost_scsi", 00:07:45.819 "config": [] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "vhost_blk", 00:07:45.819 "config": [] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "ublk", 00:07:45.819 "config": [] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "nbd", 00:07:45.819 "config": [] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "nvmf", 00:07:45.819 "config": [ 00:07:45.819 { 00:07:45.819 "method": "nvmf_set_config", 00:07:45.819 "params": { 00:07:45.819 "discovery_filter": "match_any", 00:07:45.819 "admin_cmd_passthru": { 00:07:45.819 "identify_ctrlr": false 00:07:45.819 } 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "nvmf_set_max_subsystems", 00:07:45.819 "params": { 00:07:45.819 "max_subsystems": 1024 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "nvmf_set_crdt", 00:07:45.819 "params": { 00:07:45.819 "crdt1": 0, 00:07:45.819 "crdt2": 0, 00:07:45.819 "crdt3": 0 00:07:45.819 } 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "method": "nvmf_create_transport", 00:07:45.819 "params": { 00:07:45.819 "trtype": "TCP", 00:07:45.819 "max_queue_depth": 128, 00:07:45.819 "max_io_qpairs_per_ctrlr": 127, 00:07:45.819 "in_capsule_data_size": 4096, 00:07:45.819 "max_io_size": 131072, 00:07:45.819 "io_unit_size": 131072, 00:07:45.819 "max_aq_depth": 128, 00:07:45.819 "num_shared_buffers": 511, 00:07:45.819 "buf_cache_size": 4294967295, 00:07:45.819 "dif_insert_or_strip": false, 00:07:45.819 "zcopy": false, 00:07:45.819 "c2h_success": true, 00:07:45.819 "sock_priority": 0, 00:07:45.819 "abort_timeout_sec": 1, 00:07:45.819 "ack_timeout": 0, 00:07:45.819 "data_wr_pool_size": 0 00:07:45.819 } 00:07:45.819 } 00:07:45.819 ] 00:07:45.819 }, 00:07:45.819 { 00:07:45.819 "subsystem": "iscsi", 00:07:45.819 "config": [ 00:07:45.819 { 00:07:45.819 "method": "iscsi_set_options", 00:07:45.819 "params": { 00:07:45.819 "node_base": "iqn.2016-06.io.spdk", 00:07:45.819 "max_sessions": 128, 00:07:45.820 "max_connections_per_session": 2, 00:07:45.820 "max_queue_depth": 64, 00:07:45.820 "default_time2wait": 2, 00:07:45.820 "default_time2retain": 20, 00:07:45.820 "first_burst_length": 8192, 00:07:45.820 "immediate_data": true, 00:07:45.820 "allow_duplicated_isid": false, 00:07:45.820 "error_recovery_level": 0, 00:07:45.820 "nop_timeout": 60, 00:07:45.820 "nop_in_interval": 30, 00:07:45.820 "disable_chap": false, 00:07:45.820 "require_chap": false, 00:07:45.820 "mutual_chap": false, 00:07:45.820 "chap_group": 0, 00:07:45.820 "max_large_datain_per_connection": 64, 00:07:45.820 "max_r2t_per_connection": 4, 00:07:45.820 "pdu_pool_size": 36864, 00:07:45.820 "immediate_data_pool_size": 16384, 00:07:45.820 "data_out_pool_size": 2048 00:07:45.820 } 
00:07:45.820 } 00:07:45.820 ] 00:07:45.820 } 00:07:45.820 ] 00:07:45.820 } 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1229047 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1229047 ']' 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1229047 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1229047 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1229047' 00:07:45.820 killing process with pid 1229047 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1229047 00:07:45.820 13:35:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1229047 00:07:46.078 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1229318 00:07:46.078 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:46.078 13:35:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1229318 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1229318 ']' 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1229318 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1229318 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1229318' 00:07:51.355 killing process with pid 1229318 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1229318 00:07:51.355 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1229318 00:07:51.614 13:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:51.615 00:07:51.615 real 0m6.887s 00:07:51.615 user 0m6.686s 00:07:51.615 sys 0m0.740s 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:51.615 ************************************ 00:07:51.615 END TEST skip_rpc_with_json 00:07:51.615 ************************************ 00:07:51.615 13:35:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:51.615 13:35:44 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:51.615 13:35:44 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:51.615 13:35:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.615 ************************************ 00:07:51.615 START TEST skip_rpc_with_delay 00:07:51.615 ************************************ 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:51.615 [2024-06-11 13:35:44.443116] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
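That error is the expected outcome: --wait-for-rpc defers subsystem initialization until an RPC tells the target to proceed, so it is rejected when no RPC server will be started at all. The normal pairing looks roughly like this (a sketch, not from this run; framework_start_init is the standard SPDK RPC for completing deferred init):

    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &   # boot, but hold subsystem init
    sleep 2                                      # illustrative; waitforlisten is the robust way
    scripts/rpc.py framework_start_init          # complete initialization
    scripts/rpc.py spdk_get_version              # ordinary RPCs work from here on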
00:07:51.615 [2024-06-11 13:35:44.443205] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:51.615 00:07:51.615 real 0m0.076s 00:07:51.615 user 0m0.044s 00:07:51.615 sys 0m0.031s 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:51.615 13:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:51.615 ************************************ 00:07:51.615 END TEST skip_rpc_with_delay 00:07:51.615 ************************************ 00:07:51.615 13:35:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:51.615 13:35:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:51.615 13:35:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:51.615 13:35:44 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:51.615 13:35:44 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:51.615 13:35:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.874 ************************************ 00:07:51.874 START TEST exit_on_failed_rpc_init 00:07:51.874 ************************************ 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1230194 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1230194 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1230194 ']' 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:51.874 13:35:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:51.874 [2024-06-11 13:35:44.590409] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
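exit_on_failed_rpc_init is about to launch a second target against the same default RPC socket; the point of the test is that the second instance must fail init and exit non-zero. In outline (a sketch; the sleep is illustrative, the test uses waitforlisten):

    build/bin/spdk_tgt -m 0x1 &          # first instance claims /var/tmp/spdk.sock
    sleep 2
    if build/bin/spdk_tgt -m 0x2; then   # same default socket, init should fail
        echo "unexpected: second target started on a busy socket" >&2
    fi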
00:07:51.874 [2024-06-11 13:35:44.590466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230194 ] 00:07:51.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.874 [2024-06-11 13:35:44.692598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.874 [2024-06-11 13:35:44.779571] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:52.808 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:52.808 [2024-06-11 13:35:45.486990] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:52.808 [2024-06-11 13:35:45.487057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230450 ] 00:07:52.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.809 [2024-06-11 13:35:45.579541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.809 [2024-06-11 13:35:45.660409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.809 [2024-06-11 13:35:45.660489] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
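The 'socket in use' error above is exactly the failure the test wants. Two targets can coexist when each is given its own RPC socket; a sketch using the -r/--rpc-socket option and rpc.py -s (socket paths are illustrative, and neither flag appears in this run):

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    sleep 2
    scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version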
00:07:52.809 [2024-06-11 13:35:45.660516] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:52.809 [2024-06-11 13:35:45.660527] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.067 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:07:53.067 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:53.067 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1230194 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1230194 ']' 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1230194 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1230194 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1230194' 00:07:53.068 killing process with pid 1230194 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1230194 00:07:53.068 13:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1230194 00:07:53.327 00:07:53.327 real 0m1.592s 00:07:53.327 user 0m1.810s 00:07:53.327 sys 0m0.517s 00:07:53.327 13:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.327 13:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:53.327 ************************************ 00:07:53.327 END TEST exit_on_failed_rpc_init 00:07:53.327 ************************************ 00:07:53.327 13:35:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:53.327 00:07:53.327 real 0m14.360s 00:07:53.327 user 0m13.807s 00:07:53.327 sys 0m1.893s 00:07:53.327 13:35:46 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.327 13:35:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.327 ************************************ 00:07:53.327 END TEST skip_rpc 00:07:53.327 ************************************ 00:07:53.327 13:35:46 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:53.327 13:35:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.327 13:35:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.327 13:35:46 -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.586 ************************************ 00:07:53.586 START TEST rpc_client 00:07:53.586 ************************************ 00:07:53.586 13:35:46 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:53.586 * Looking for test storage... 00:07:53.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:53.586 13:35:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:53.586 OK 00:07:53.586 13:35:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:53.586 00:07:53.586 real 0m0.132s 00:07:53.586 user 0m0.055s 00:07:53.586 sys 0m0.086s 00:07:53.586 13:35:46 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.586 13:35:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:53.586 ************************************ 00:07:53.586 END TEST rpc_client 00:07:53.586 ************************************ 00:07:53.586 13:35:46 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:53.586 13:35:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.586 13:35:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.586 13:35:46 -- common/autotest_common.sh@10 -- # set +x 00:07:53.586 ************************************ 00:07:53.586 START TEST json_config 00:07:53.586 ************************************ 00:07:53.586 13:35:46 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.845 13:35:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.845 13:35:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.845 13:35:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.845 13:35:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.845 13:35:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.845 13:35:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.845 13:35:46 json_config -- paths/export.sh@5 -- # export PATH 00:07:53.845 13:35:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@47 -- # : 0 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.845 13:35:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:53.845 13:35:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:53.846 INFO: JSON configuration test init 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.846 13:35:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:53.846 13:35:46 json_config -- json_config/common.sh@9 -- # local app=target 00:07:53.846 13:35:46 json_config -- json_config/common.sh@10 -- # shift 00:07:53.846 13:35:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:53.846 13:35:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:53.846 13:35:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:53.846 13:35:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:53.846 13:35:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:53.846 13:35:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1230816 00:07:53.846 13:35:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:53.846 Waiting for target to run... 
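[Annotation] Because the launch below passes --wait-for-rpc, the target binds the RPC socket and then parks until framework_start_init is issued over RPC; waitforlisten is essentially a poll loop against that socket. A hedged sketch of the pattern (paths and flags as logged; the polling loop itself is illustrative):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                     # retry until the socket answers
    done
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # resume subsystem init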
00:07:53.846 13:35:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1230816 /var/tmp/spdk_tgt.sock 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@830 -- # '[' -z 1230816 ']' 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:53.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:53.846 13:35:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:53.846 13:35:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.846 [2024-06-11 13:35:46.645498] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:53.846 [2024-06-11 13:35:46.645565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230816 ] 00:07:53.846 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.105 [2024-06-11 13:35:46.970260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.363 [2024-06-11 13:35:47.047561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.931 13:35:47 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:54.931 13:35:47 json_config -- common/autotest_common.sh@863 -- # return 0 00:07:54.931 13:35:47 json_config -- json_config/common.sh@26 -- # echo '' 00:07:54.931 00:07:54.931 13:35:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:54.931 13:35:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:54.931 13:35:47 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:54.931 13:35:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.931 13:35:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:54.931 13:35:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:54.931 13:35:47 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:54.931 13:35:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.931 13:35:47 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:54.931 13:35:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:54.931 13:35:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:58.217 13:35:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:58.217 13:35:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:58.217 13:35:50 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:58.217 13:35:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.217 13:35:50 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:07:58.217 13:35:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:58.217 13:35:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:58.217 13:35:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:58.218 13:35:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:58.218 13:35:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:58.218 13:35:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:58.218 13:35:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:58.218 13:35:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:58.218 13:35:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:58.218 13:35:50 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:58.218 13:35:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:07:58.218 13:35:51 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:58.218 13:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:07:58.218 13:35:51 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:58.218 13:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:58.476 MallocForNvmf0 00:07:58.476 13:35:51 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:58.476 13:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:58.734 MallocForNvmf1 00:07:58.734 13:35:51 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:58.734 13:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:58.992 [2024-06-11 13:35:51.692324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.992 13:35:51 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:58.992 13:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.251 13:35:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:59.251 13:35:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:59.508 13:35:52 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:59.508 13:35:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:59.508 13:35:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:59.508 13:35:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:59.766 [2024-06-11 13:35:52.603264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:59.766 13:35:52 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:07:59.766 13:35:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:59.766 13:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:59.766 13:35:52 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:59.766 13:35:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:59.766 13:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.024 13:35:52 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:08:00.024 13:35:52 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:00.024 13:35:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:00.024 MallocBdevForConfigChangeCheck 00:08:00.024 13:35:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:08:00.024 13:35:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:00.024 13:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.282 13:35:52 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:08:00.282 13:35:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:00.540 13:35:53 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:08:00.540 INFO: shutting down applications... 
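[Annotation] Before it is torn down: condensed, the NVMf-over-TCP configuration the test just built is a handful of RPCs, all verbatim from the trace above (rpc.py shown without its full workspace path):

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420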
00:08:00.540 13:35:53 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:08:00.540 13:35:53 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:08:00.540 13:35:53 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:08:00.540 13:35:53 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:03.070 Calling clear_iscsi_subsystem 00:08:03.070 Calling clear_nvmf_subsystem 00:08:03.070 Calling clear_nbd_subsystem 00:08:03.070 Calling clear_ublk_subsystem 00:08:03.070 Calling clear_vhost_blk_subsystem 00:08:03.070 Calling clear_vhost_scsi_subsystem 00:08:03.070 Calling clear_bdev_subsystem 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@343 -- # count=100 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@345 -- # break 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:08:03.070 13:35:55 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:08:03.070 13:35:55 json_config -- json_config/common.sh@31 -- # local app=target 00:08:03.070 13:35:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:03.070 13:35:55 json_config -- json_config/common.sh@35 -- # [[ -n 1230816 ]] 00:08:03.070 13:35:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1230816 00:08:03.070 13:35:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:03.070 13:35:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:03.070 13:35:55 json_config -- json_config/common.sh@41 -- # kill -0 1230816 00:08:03.070 13:35:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:03.638 13:35:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:03.638 13:35:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:03.638 13:35:56 json_config -- json_config/common.sh@41 -- # kill -0 1230816 00:08:03.638 13:35:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:03.638 13:35:56 json_config -- json_config/common.sh@43 -- # break 00:08:03.638 13:35:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:03.638 13:35:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:03.638 SPDK target shutdown done 00:08:03.638 13:35:56 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:08:03.638 INFO: relaunching applications... 
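[Annotation] The relaunch announced above closes the save/restore loop: dump the live config over RPC, stop the target, and start a fresh one from the JSON file. Sketched with the file and flags from this run ($tgt_pid is illustrative):

    rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    kill -SIGINT "$tgt_pid"           # graceful shutdown, as in the loop above
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json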
00:08:03.638 13:35:56 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:03.638 13:35:56 json_config -- json_config/common.sh@9 -- # local app=target 00:08:03.638 13:35:56 json_config -- json_config/common.sh@10 -- # shift 00:08:03.638 13:35:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:03.638 13:35:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:03.638 13:35:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:03.638 13:35:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:03.638 13:35:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:03.638 13:35:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1232546 00:08:03.638 13:35:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:03.638 Waiting for target to run... 00:08:03.638 13:35:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:03.638 13:35:56 json_config -- json_config/common.sh@25 -- # waitforlisten 1232546 /var/tmp/spdk_tgt.sock 00:08:03.638 13:35:56 json_config -- common/autotest_common.sh@830 -- # '[' -z 1232546 ']' 00:08:03.638 13:35:56 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:03.638 13:35:56 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:03.638 13:35:56 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:03.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:03.638 13:35:56 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:03.638 13:35:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:03.638 [2024-06-11 13:35:56.399320] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:03.638 [2024-06-11 13:35:56.399393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232546 ] 00:08:03.638 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.206 [2024-06-11 13:35:56.876279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.206 [2024-06-11 13:35:56.978055] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.497 [2024-06-11 13:36:00.025901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.497 [2024-06-11 13:36:00.058294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:08.065 13:36:00 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:08.065 13:36:00 json_config -- common/autotest_common.sh@863 -- # return 0 00:08:08.065 13:36:00 json_config -- json_config/common.sh@26 -- # echo '' 00:08:08.065 00:08:08.065 13:36:00 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:08:08.065 13:36:00 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:08.065 INFO: Checking if target configuration is the same... 
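[Annotation] The same-configuration check that follows reduces to a sorted textual diff: the freshly saved config and the on-disk one are both passed through config_filter.py -method sort (which, judging by the pipeline below, normalizes JSON ordering on stdin/stdout) and compared with diff -u. A sketch with illustrative temp-file names:

    rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > /tmp/live.sorted
    config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
    diff -u /tmp/saved.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'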
00:08:08.065 13:36:00 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.065 13:36:00 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:08:08.065 13:36:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:08.065 + '[' 2 -ne 2 ']' 00:08:08.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:08.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:08.065 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:08.065 +++ basename /dev/fd/62 00:08:08.065 ++ mktemp /tmp/62.XXX 00:08:08.065 + tmp_file_1=/tmp/62.Why 00:08:08.065 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:08.065 + tmp_file_2=/tmp/spdk_tgt_config.json.Ad4 00:08:08.065 + ret=0 00:08:08.065 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:08.324 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:08.324 + diff -u /tmp/62.Why /tmp/spdk_tgt_config.json.Ad4 00:08:08.324 + echo 'INFO: JSON config files are the same' 00:08:08.324 INFO: JSON config files are the same 00:08:08.324 + rm /tmp/62.Why /tmp/spdk_tgt_config.json.Ad4 00:08:08.324 + exit 0 00:08:08.324 13:36:01 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:08:08.324 13:36:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:08.324 INFO: changing configuration and checking if this can be detected... 00:08:08.324 13:36:01 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:08.324 13:36:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:08.584 13:36:01 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.584 13:36:01 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:08:08.584 13:36:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:08.584 + '[' 2 -ne 2 ']' 00:08:08.584 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:08.584 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:08.584 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:08.584 +++ basename /dev/fd/62 00:08:08.584 ++ mktemp /tmp/62.XXX 00:08:08.584 + tmp_file_1=/tmp/62.Dux 00:08:08.584 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:08.584 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:08.584 + tmp_file_2=/tmp/spdk_tgt_config.json.FxE 00:08:08.584 + ret=0 00:08:08.584 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:09.152 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:09.152 + diff -u /tmp/62.Dux /tmp/spdk_tgt_config.json.FxE 00:08:09.152 + ret=1 00:08:09.152 + echo '=== Start of file: /tmp/62.Dux ===' 00:08:09.152 + cat /tmp/62.Dux 00:08:09.152 + echo '=== End of file: /tmp/62.Dux ===' 00:08:09.152 + echo '' 00:08:09.152 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FxE ===' 00:08:09.152 + cat /tmp/spdk_tgt_config.json.FxE 00:08:09.152 + echo '=== End of file: /tmp/spdk_tgt_config.json.FxE ===' 00:08:09.152 + echo '' 00:08:09.152 + rm /tmp/62.Dux /tmp/spdk_tgt_config.json.FxE 00:08:09.152 + exit 1 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:08:09.152 INFO: configuration change detected. 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:08:09.152 13:36:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:09.152 13:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 1232546 ]] 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:08:09.152 13:36:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:09.152 13:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:08:09.152 13:36:01 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:09.152 13:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.152 13:36:01 json_config -- json_config/json_config.sh@323 -- # killprocess 1232546 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@949 -- # '[' -z 1232546 ']' 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@953 -- # kill -0 1232546 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@954 -- # uname 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:09.153 13:36:01 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1232546 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1232546' 00:08:09.153 killing process with pid 1232546 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@968 -- # kill 1232546 00:08:09.153 13:36:01 json_config -- common/autotest_common.sh@973 -- # wait 1232546 00:08:11.695 13:36:04 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:11.695 13:36:04 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:08:11.695 13:36:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:11.695 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:11.695 13:36:04 json_config -- json_config/json_config.sh@328 -- # return 0 00:08:11.695 13:36:04 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:08:11.695 INFO: Success 00:08:11.695 00:08:11.695 real 0m17.657s 00:08:11.695 user 0m19.141s 00:08:11.695 sys 0m2.521s 00:08:11.695 13:36:04 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:11.695 13:36:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:11.695 ************************************ 00:08:11.695 END TEST json_config 00:08:11.695 ************************************ 00:08:11.695 13:36:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:11.695 13:36:04 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:11.695 13:36:04 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:11.695 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:08:11.695 ************************************ 00:08:11.695 START TEST json_config_extra_key 00:08:11.695 ************************************ 00:08:11.695 13:36:04 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:11.695 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.695 13:36:04 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.695 13:36:04 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.695 13:36:04 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.695 13:36:04 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.695 13:36:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.695 13:36:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.695 13:36:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.695 13:36:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:11.695 13:36:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.695 13:36:04 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.695 13:36:04 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:11.696 INFO: launching applications... 00:08:11.696 13:36:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1234298 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:11.696 Waiting for target to run... 
00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1234298 /var/tmp/spdk_tgt.sock 00:08:11.696 13:36:04 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1234298 ']' 00:08:11.696 13:36:04 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:11.696 13:36:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:11.696 13:36:04 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:11.696 13:36:04 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:11.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:11.696 13:36:04 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:11.696 13:36:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:11.696 [2024-06-11 13:36:04.377734] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:11.696 [2024-06-11 13:36:04.377808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234298 ] 00:08:11.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.954 [2024-06-11 13:36:04.853218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.220 [2024-06-11 13:36:04.940030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.522 13:36:05 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:12.522 13:36:05 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:12.522 00:08:12.522 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:12.522 INFO: shutting down applications... 
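[Annotation] The shutdown that follows is the same pattern every app test in this log uses: send SIGINT, then poll liveness for up to 30 half-second intervals before giving up. A minimal sketch ($app_pid is illustrative):

    kill -SIGINT "$app_pid"           # ask the target to exit cleanly
    for i in $(seq 1 30); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown done
        sleep 0.5
    done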
00:08:12.522 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1234298 ]] 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1234298 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1234298 00:08:12.522 13:36:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1234298 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:13.089 13:36:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:13.089 SPDK target shutdown done 00:08:13.089 13:36:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:13.089 Success 00:08:13.089 00:08:13.089 real 0m1.580s 00:08:13.089 user 0m1.241s 00:08:13.089 sys 0m0.617s 00:08:13.089 13:36:05 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:13.089 13:36:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:13.089 ************************************ 00:08:13.089 END TEST json_config_extra_key 00:08:13.089 ************************************ 00:08:13.089 13:36:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:13.089 13:36:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:13.089 13:36:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:13.089 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:08:13.089 ************************************ 00:08:13.089 START TEST alias_rpc 00:08:13.089 ************************************ 00:08:13.089 13:36:05 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:13.090 * Looking for test storage... 
00:08:13.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:13.090 13:36:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:13.090 13:36:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1234872 00:08:13.090 13:36:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1234872 00:08:13.090 13:36:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:13.090 13:36:05 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1234872 ']' 00:08:13.090 13:36:05 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.090 13:36:05 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:13.090 13:36:05 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.090 13:36:05 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:13.090 13:36:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.347 [2024-06-11 13:36:06.026340] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:13.347 [2024-06-11 13:36:06.026409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234872 ] 00:08:13.347 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.347 [2024-06-11 13:36:06.128356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.347 [2024-06-11 13:36:06.215234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.277 13:36:06 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:14.277 13:36:06 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:14.277 13:36:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:14.277 13:36:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1234872 00:08:14.277 13:36:07 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1234872 ']' 00:08:14.277 13:36:07 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1234872 00:08:14.277 13:36:07 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:08:14.277 13:36:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:14.277 13:36:07 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1234872 00:08:14.536 13:36:07 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:14.536 13:36:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:14.536 13:36:07 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1234872' 00:08:14.536 killing process with pid 1234872 00:08:14.536 13:36:07 alias_rpc -- common/autotest_common.sh@968 -- # kill 1234872 00:08:14.536 13:36:07 alias_rpc -- common/autotest_common.sh@973 -- # wait 1234872 00:08:14.794 00:08:14.794 real 0m1.691s 00:08:14.794 user 0m1.878s 00:08:14.795 sys 0m0.512s 00:08:14.795 13:36:07 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:14.795 13:36:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.795 
************************************ 00:08:14.795 END TEST alias_rpc 00:08:14.795 ************************************ 00:08:14.795 13:36:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:08:14.795 13:36:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:14.795 13:36:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:14.795 13:36:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:14.795 13:36:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.795 ************************************ 00:08:14.795 START TEST spdkcli_tcp 00:08:14.795 ************************************ 00:08:14.795 13:36:07 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:15.052 * Looking for test storage... 00:08:15.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:15.052 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:15.052 13:36:07 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.053 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1235214 00:08:15.053 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:15.053 13:36:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1235214 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1235214 ']' 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:15.053 13:36:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.053 [2024-06-11 13:36:07.800966] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
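[Annotation] spdkcli_tcp exercises the RPC server over TCP: a few lines below, socat bridges TCP port 9998 to the /var/tmp/spdk.sock Unix socket, and rpc.py is pointed at 127.0.0.1:9998 with retries and a timeout, returning the method list that follows. Both commands verbatim from the trace (rpc.py shown without its full path):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # -r retries, -t timeout in seconds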
00:08:15.053 [2024-06-11 13:36:07.801037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235214 ] 00:08:15.053 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.053 [2024-06-11 13:36:07.904523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.310 [2024-06-11 13:36:07.993943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.310 [2024-06-11 13:36:07.993949] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.873 13:36:08 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:15.873 13:36:08 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:08:15.873 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1235460 00:08:15.873 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:15.874 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:16.133 [ 00:08:16.133 "bdev_malloc_delete", 00:08:16.133 "bdev_malloc_create", 00:08:16.133 "bdev_null_resize", 00:08:16.133 "bdev_null_delete", 00:08:16.133 "bdev_null_create", 00:08:16.133 "bdev_nvme_cuse_unregister", 00:08:16.133 "bdev_nvme_cuse_register", 00:08:16.133 "bdev_opal_new_user", 00:08:16.133 "bdev_opal_set_lock_state", 00:08:16.133 "bdev_opal_delete", 00:08:16.133 "bdev_opal_get_info", 00:08:16.133 "bdev_opal_create", 00:08:16.133 "bdev_nvme_opal_revert", 00:08:16.133 "bdev_nvme_opal_init", 00:08:16.133 "bdev_nvme_send_cmd", 00:08:16.133 "bdev_nvme_get_path_iostat", 00:08:16.133 "bdev_nvme_get_mdns_discovery_info", 00:08:16.133 "bdev_nvme_stop_mdns_discovery", 00:08:16.133 "bdev_nvme_start_mdns_discovery", 00:08:16.133 "bdev_nvme_set_multipath_policy", 00:08:16.133 "bdev_nvme_set_preferred_path", 00:08:16.133 "bdev_nvme_get_io_paths", 00:08:16.133 "bdev_nvme_remove_error_injection", 00:08:16.133 "bdev_nvme_add_error_injection", 00:08:16.133 "bdev_nvme_get_discovery_info", 00:08:16.133 "bdev_nvme_stop_discovery", 00:08:16.133 "bdev_nvme_start_discovery", 00:08:16.133 "bdev_nvme_get_controller_health_info", 00:08:16.133 "bdev_nvme_disable_controller", 00:08:16.133 "bdev_nvme_enable_controller", 00:08:16.133 "bdev_nvme_reset_controller", 00:08:16.133 "bdev_nvme_get_transport_statistics", 00:08:16.133 "bdev_nvme_apply_firmware", 00:08:16.133 "bdev_nvme_detach_controller", 00:08:16.133 "bdev_nvme_get_controllers", 00:08:16.133 "bdev_nvme_attach_controller", 00:08:16.133 "bdev_nvme_set_hotplug", 00:08:16.133 "bdev_nvme_set_options", 00:08:16.133 "bdev_passthru_delete", 00:08:16.133 "bdev_passthru_create", 00:08:16.133 "bdev_lvol_set_parent_bdev", 00:08:16.133 "bdev_lvol_set_parent", 00:08:16.133 "bdev_lvol_check_shallow_copy", 00:08:16.133 "bdev_lvol_start_shallow_copy", 00:08:16.133 "bdev_lvol_grow_lvstore", 00:08:16.133 "bdev_lvol_get_lvols", 00:08:16.133 "bdev_lvol_get_lvstores", 00:08:16.133 "bdev_lvol_delete", 00:08:16.133 "bdev_lvol_set_read_only", 00:08:16.133 "bdev_lvol_resize", 00:08:16.133 "bdev_lvol_decouple_parent", 00:08:16.133 "bdev_lvol_inflate", 00:08:16.133 "bdev_lvol_rename", 00:08:16.133 "bdev_lvol_clone_bdev", 00:08:16.133 "bdev_lvol_clone", 00:08:16.133 "bdev_lvol_snapshot", 00:08:16.133 "bdev_lvol_create", 00:08:16.133 "bdev_lvol_delete_lvstore", 00:08:16.133 "bdev_lvol_rename_lvstore", 
00:08:16.133 "bdev_lvol_create_lvstore", 00:08:16.133 "bdev_raid_set_options", 00:08:16.133 "bdev_raid_remove_base_bdev", 00:08:16.133 "bdev_raid_add_base_bdev", 00:08:16.133 "bdev_raid_delete", 00:08:16.133 "bdev_raid_create", 00:08:16.133 "bdev_raid_get_bdevs", 00:08:16.133 "bdev_error_inject_error", 00:08:16.133 "bdev_error_delete", 00:08:16.133 "bdev_error_create", 00:08:16.133 "bdev_split_delete", 00:08:16.133 "bdev_split_create", 00:08:16.133 "bdev_delay_delete", 00:08:16.133 "bdev_delay_create", 00:08:16.133 "bdev_delay_update_latency", 00:08:16.133 "bdev_zone_block_delete", 00:08:16.133 "bdev_zone_block_create", 00:08:16.133 "blobfs_create", 00:08:16.133 "blobfs_detect", 00:08:16.133 "blobfs_set_cache_size", 00:08:16.133 "bdev_aio_delete", 00:08:16.133 "bdev_aio_rescan", 00:08:16.133 "bdev_aio_create", 00:08:16.133 "bdev_ftl_set_property", 00:08:16.133 "bdev_ftl_get_properties", 00:08:16.133 "bdev_ftl_get_stats", 00:08:16.133 "bdev_ftl_unmap", 00:08:16.133 "bdev_ftl_unload", 00:08:16.133 "bdev_ftl_delete", 00:08:16.133 "bdev_ftl_load", 00:08:16.133 "bdev_ftl_create", 00:08:16.133 "bdev_virtio_attach_controller", 00:08:16.133 "bdev_virtio_scsi_get_devices", 00:08:16.133 "bdev_virtio_detach_controller", 00:08:16.133 "bdev_virtio_blk_set_hotplug", 00:08:16.133 "bdev_iscsi_delete", 00:08:16.133 "bdev_iscsi_create", 00:08:16.133 "bdev_iscsi_set_options", 00:08:16.133 "accel_error_inject_error", 00:08:16.133 "ioat_scan_accel_module", 00:08:16.133 "dsa_scan_accel_module", 00:08:16.133 "iaa_scan_accel_module", 00:08:16.133 "keyring_file_remove_key", 00:08:16.133 "keyring_file_add_key", 00:08:16.133 "keyring_linux_set_options", 00:08:16.133 "iscsi_get_histogram", 00:08:16.133 "iscsi_enable_histogram", 00:08:16.133 "iscsi_set_options", 00:08:16.133 "iscsi_get_auth_groups", 00:08:16.133 "iscsi_auth_group_remove_secret", 00:08:16.133 "iscsi_auth_group_add_secret", 00:08:16.133 "iscsi_delete_auth_group", 00:08:16.133 "iscsi_create_auth_group", 00:08:16.133 "iscsi_set_discovery_auth", 00:08:16.133 "iscsi_get_options", 00:08:16.133 "iscsi_target_node_request_logout", 00:08:16.133 "iscsi_target_node_set_redirect", 00:08:16.133 "iscsi_target_node_set_auth", 00:08:16.133 "iscsi_target_node_add_lun", 00:08:16.133 "iscsi_get_stats", 00:08:16.133 "iscsi_get_connections", 00:08:16.133 "iscsi_portal_group_set_auth", 00:08:16.133 "iscsi_start_portal_group", 00:08:16.133 "iscsi_delete_portal_group", 00:08:16.133 "iscsi_create_portal_group", 00:08:16.133 "iscsi_get_portal_groups", 00:08:16.133 "iscsi_delete_target_node", 00:08:16.133 "iscsi_target_node_remove_pg_ig_maps", 00:08:16.133 "iscsi_target_node_add_pg_ig_maps", 00:08:16.133 "iscsi_create_target_node", 00:08:16.133 "iscsi_get_target_nodes", 00:08:16.133 "iscsi_delete_initiator_group", 00:08:16.133 "iscsi_initiator_group_remove_initiators", 00:08:16.133 "iscsi_initiator_group_add_initiators", 00:08:16.133 "iscsi_create_initiator_group", 00:08:16.133 "iscsi_get_initiator_groups", 00:08:16.133 "nvmf_set_crdt", 00:08:16.133 "nvmf_set_config", 00:08:16.133 "nvmf_set_max_subsystems", 00:08:16.133 "nvmf_stop_mdns_prr", 00:08:16.133 "nvmf_publish_mdns_prr", 00:08:16.133 "nvmf_subsystem_get_listeners", 00:08:16.133 "nvmf_subsystem_get_qpairs", 00:08:16.133 "nvmf_subsystem_get_controllers", 00:08:16.133 "nvmf_get_stats", 00:08:16.133 "nvmf_get_transports", 00:08:16.133 "nvmf_create_transport", 00:08:16.133 "nvmf_get_targets", 00:08:16.133 "nvmf_delete_target", 00:08:16.133 "nvmf_create_target", 00:08:16.133 "nvmf_subsystem_allow_any_host", 00:08:16.133 
"nvmf_subsystem_remove_host", 00:08:16.133 "nvmf_subsystem_add_host", 00:08:16.133 "nvmf_ns_remove_host", 00:08:16.133 "nvmf_ns_add_host", 00:08:16.133 "nvmf_subsystem_remove_ns", 00:08:16.133 "nvmf_subsystem_add_ns", 00:08:16.133 "nvmf_subsystem_listener_set_ana_state", 00:08:16.134 "nvmf_discovery_get_referrals", 00:08:16.134 "nvmf_discovery_remove_referral", 00:08:16.134 "nvmf_discovery_add_referral", 00:08:16.134 "nvmf_subsystem_remove_listener", 00:08:16.134 "nvmf_subsystem_add_listener", 00:08:16.134 "nvmf_delete_subsystem", 00:08:16.134 "nvmf_create_subsystem", 00:08:16.134 "nvmf_get_subsystems", 00:08:16.134 "env_dpdk_get_mem_stats", 00:08:16.134 "nbd_get_disks", 00:08:16.134 "nbd_stop_disk", 00:08:16.134 "nbd_start_disk", 00:08:16.134 "ublk_recover_disk", 00:08:16.134 "ublk_get_disks", 00:08:16.134 "ublk_stop_disk", 00:08:16.134 "ublk_start_disk", 00:08:16.134 "ublk_destroy_target", 00:08:16.134 "ublk_create_target", 00:08:16.134 "virtio_blk_create_transport", 00:08:16.134 "virtio_blk_get_transports", 00:08:16.134 "vhost_controller_set_coalescing", 00:08:16.134 "vhost_get_controllers", 00:08:16.134 "vhost_delete_controller", 00:08:16.134 "vhost_create_blk_controller", 00:08:16.134 "vhost_scsi_controller_remove_target", 00:08:16.134 "vhost_scsi_controller_add_target", 00:08:16.134 "vhost_start_scsi_controller", 00:08:16.134 "vhost_create_scsi_controller", 00:08:16.134 "thread_set_cpumask", 00:08:16.134 "framework_get_scheduler", 00:08:16.134 "framework_set_scheduler", 00:08:16.134 "framework_get_reactors", 00:08:16.134 "thread_get_io_channels", 00:08:16.134 "thread_get_pollers", 00:08:16.134 "thread_get_stats", 00:08:16.134 "framework_monitor_context_switch", 00:08:16.134 "spdk_kill_instance", 00:08:16.134 "log_enable_timestamps", 00:08:16.134 "log_get_flags", 00:08:16.134 "log_clear_flag", 00:08:16.134 "log_set_flag", 00:08:16.134 "log_get_level", 00:08:16.134 "log_set_level", 00:08:16.134 "log_get_print_level", 00:08:16.134 "log_set_print_level", 00:08:16.134 "framework_enable_cpumask_locks", 00:08:16.134 "framework_disable_cpumask_locks", 00:08:16.134 "framework_wait_init", 00:08:16.134 "framework_start_init", 00:08:16.134 "scsi_get_devices", 00:08:16.134 "bdev_get_histogram", 00:08:16.134 "bdev_enable_histogram", 00:08:16.134 "bdev_set_qos_limit", 00:08:16.134 "bdev_set_qd_sampling_period", 00:08:16.134 "bdev_get_bdevs", 00:08:16.134 "bdev_reset_iostat", 00:08:16.134 "bdev_get_iostat", 00:08:16.134 "bdev_examine", 00:08:16.134 "bdev_wait_for_examine", 00:08:16.134 "bdev_set_options", 00:08:16.134 "notify_get_notifications", 00:08:16.134 "notify_get_types", 00:08:16.134 "accel_get_stats", 00:08:16.134 "accel_set_options", 00:08:16.134 "accel_set_driver", 00:08:16.134 "accel_crypto_key_destroy", 00:08:16.134 "accel_crypto_keys_get", 00:08:16.134 "accel_crypto_key_create", 00:08:16.134 "accel_assign_opc", 00:08:16.134 "accel_get_module_info", 00:08:16.134 "accel_get_opc_assignments", 00:08:16.134 "vmd_rescan", 00:08:16.134 "vmd_remove_device", 00:08:16.134 "vmd_enable", 00:08:16.134 "sock_get_default_impl", 00:08:16.134 "sock_set_default_impl", 00:08:16.134 "sock_impl_set_options", 00:08:16.134 "sock_impl_get_options", 00:08:16.134 "iobuf_get_stats", 00:08:16.134 "iobuf_set_options", 00:08:16.134 "framework_get_pci_devices", 00:08:16.134 "framework_get_config", 00:08:16.134 "framework_get_subsystems", 00:08:16.134 "trace_get_info", 00:08:16.134 "trace_get_tpoint_group_mask", 00:08:16.134 "trace_disable_tpoint_group", 00:08:16.134 "trace_enable_tpoint_group", 00:08:16.134 
"trace_clear_tpoint_mask", 00:08:16.134 "trace_set_tpoint_mask", 00:08:16.134 "keyring_get_keys", 00:08:16.134 "spdk_get_version", 00:08:16.134 "rpc_get_methods" 00:08:16.134 ] 00:08:16.134 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.134 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:16.134 13:36:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1235214 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1235214 ']' 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1235214 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:16.134 13:36:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1235214 00:08:16.134 13:36:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:16.134 13:36:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:16.134 13:36:09 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1235214' 00:08:16.134 killing process with pid 1235214 00:08:16.134 13:36:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1235214 00:08:16.134 13:36:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1235214 00:08:16.700 00:08:16.700 real 0m1.736s 00:08:16.700 user 0m3.215s 00:08:16.700 sys 0m0.565s 00:08:16.700 13:36:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:16.700 13:36:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.700 ************************************ 00:08:16.700 END TEST spdkcli_tcp 00:08:16.700 ************************************ 00:08:16.700 13:36:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:16.700 13:36:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:16.700 13:36:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:16.700 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:08:16.700 ************************************ 00:08:16.700 START TEST dpdk_mem_utility 00:08:16.700 ************************************ 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:16.700 * Looking for test storage... 
00:08:16.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:16.700 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:16.700 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1235678 00:08:16.700 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1235678 00:08:16.700 13:36:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1235678 ']' 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:16.700 13:36:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:16.700 [2024-06-11 13:36:09.607815] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:16.700 [2024-06-11 13:36:09.607881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235678 ] 00:08:16.958 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.958 [2024-06-11 13:36:09.708429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.958 [2024-06-11 13:36:09.791072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.890 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:17.890 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:08:17.890 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:17.890 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:17.890 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:17.890 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:17.890 { 00:08:17.890 "filename": "/tmp/spdk_mem_dump.txt" 00:08:17.890 } 00:08:17.890 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:17.890 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:17.890 DPDK memory size 814.000000 MiB in 1 heap(s) 00:08:17.890 1 heaps totaling size 814.000000 MiB 00:08:17.890 size: 814.000000 MiB heap id: 0 00:08:17.890 end heaps---------- 00:08:17.890 8 mempools totaling size 598.116089 MiB 00:08:17.890 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:17.890 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:17.890 size: 84.521057 MiB name: bdev_io_1235678 00:08:17.890 size: 51.011292 MiB name: evtpool_1235678 00:08:17.890 size: 50.003479 MiB name: 
msgpool_1235678 00:08:17.890 size: 21.763794 MiB name: PDU_Pool 00:08:17.890 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:17.890 size: 0.026123 MiB name: Session_Pool 00:08:17.890 end mempools------- 00:08:17.890 6 memzones totaling size 4.142822 MiB 00:08:17.890 size: 1.000366 MiB name: RG_ring_0_1235678 00:08:17.890 size: 1.000366 MiB name: RG_ring_1_1235678 00:08:17.890 size: 1.000366 MiB name: RG_ring_4_1235678 00:08:17.890 size: 1.000366 MiB name: RG_ring_5_1235678 00:08:17.890 size: 0.125366 MiB name: RG_ring_2_1235678 00:08:17.890 size: 0.015991 MiB name: RG_ring_3_1235678 00:08:17.890 end memzones------- 00:08:17.890 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:17.890 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:08:17.890 list of free elements. size: 12.519348 MiB 00:08:17.890 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:17.890 element at address: 0x200018e00000 with size: 0.999878 MiB 00:08:17.890 element at address: 0x200019000000 with size: 0.999878 MiB 00:08:17.890 element at address: 0x200003e00000 with size: 0.996277 MiB 00:08:17.890 element at address: 0x200031c00000 with size: 0.994446 MiB 00:08:17.890 element at address: 0x200013800000 with size: 0.978699 MiB 00:08:17.890 element at address: 0x200007000000 with size: 0.959839 MiB 00:08:17.890 element at address: 0x200019200000 with size: 0.936584 MiB 00:08:17.890 element at address: 0x200000200000 with size: 0.841614 MiB 00:08:17.890 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:08:17.890 element at address: 0x20000b200000 with size: 0.490723 MiB 00:08:17.890 element at address: 0x200000800000 with size: 0.487793 MiB 00:08:17.890 element at address: 0x200019400000 with size: 0.485657 MiB 00:08:17.890 element at address: 0x200027e00000 with size: 0.410034 MiB 00:08:17.890 element at address: 0x200003a00000 with size: 0.355530 MiB 00:08:17.890 list of standard malloc elements. 
size: 199.218079 MiB 00:08:17.890 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:08:17.890 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:08:17.890 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:17.890 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:08:17.890 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:17.890 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:17.890 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:08:17.890 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:17.890 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:08:17.890 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:17.890 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:08:17.890 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:08:17.890 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:08:17.890 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:17.890 element at address: 0x200003adb300 with size: 0.000183 MiB 00:08:17.890 element at address: 0x200003adb500 with size: 0.000183 MiB 00:08:17.890 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:08:17.890 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:17.890 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:17.891 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:17.891 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:08:17.891 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:08:17.891 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:08:17.891 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:08:17.891 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:08:17.891 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:08:17.891 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:08:17.891 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:08:17.891 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:08:17.891 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:08:17.891 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:08:17.891 element at address: 0x200027e69040 with size: 0.000183 MiB 00:08:17.891 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:08:17.891 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:08:17.891 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:08:17.891 list of memzone associated elements. 
size: 602.262573 MiB 00:08:17.891 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:08:17.891 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:17.891 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:08:17.891 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:17.891 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:08:17.891 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1235678_0 00:08:17.891 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:17.891 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1235678_0 00:08:17.891 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:17.891 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1235678_0 00:08:17.891 element at address: 0x2000195be940 with size: 20.255554 MiB 00:08:17.891 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:17.891 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:08:17.891 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:17.891 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:17.891 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1235678 00:08:17.891 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:17.891 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1235678 00:08:17.891 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:17.891 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1235678 00:08:17.891 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:08:17.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:17.891 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:08:17.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:17.891 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:08:17.891 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:17.891 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:08:17.891 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:17.891 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:17.891 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1235678 00:08:17.891 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:17.891 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1235678 00:08:17.891 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:08:17.891 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1235678 00:08:17.891 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:08:17.891 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1235678 00:08:17.891 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:08:17.891 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1235678 00:08:17.891 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:08:17.891 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:17.891 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:08:17.891 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:17.891 element at address: 0x20001947c540 with size: 0.250488 MiB 00:08:17.891 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:17.891 element at address: 0x200003adf880 with size: 0.125488 MiB 00:08:17.891 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1235678 00:08:17.891 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:08:17.891 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:17.891 element at address: 0x200027e69100 with size: 0.023743 MiB 00:08:17.891 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:17.891 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:08:17.891 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1235678 00:08:17.891 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:08:17.891 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:17.891 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:08:17.891 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1235678 00:08:17.891 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:08:17.891 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1235678 00:08:17.891 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:08:17.891 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:17.891 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:17.891 13:36:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1235678 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1235678 ']' 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1235678 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1235678 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1235678' 00:08:17.891 killing process with pid 1235678 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1235678 00:08:17.891 13:36:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1235678 00:08:18.149 00:08:18.149 real 0m1.588s 00:08:18.149 user 0m1.695s 00:08:18.149 sys 0m0.514s 00:08:18.149 13:36:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:18.149 13:36:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:18.149 ************************************ 00:08:18.149 END TEST dpdk_mem_utility 00:08:18.149 ************************************ 00:08:18.407 13:36:11 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:18.407 13:36:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:18.407 13:36:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.407 13:36:11 -- common/autotest_common.sh@10 -- # set +x 00:08:18.407 ************************************ 00:08:18.407 START TEST event 00:08:18.407 ************************************ 00:08:18.407 13:36:11 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:18.407 * Looking for test storage... 
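The dpdk_mem_utility pass is a two-step flow worth noting: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK heap state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders that dump, first as the heap/mempool/memzone summary and then (with -m 0) as the per-element detail for heap 0. A sketch against a live target, assuming the default RPC socket:

  # Ask the target for a memory dump; the reply names the dump file
  ./spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
  ./spdk/scripts/dpdk_mem_info.py

  # Show busy/free element lists for heap id 0
  ./spdk/scripts/dpdk_mem_info.py -m 0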
00:08:18.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:18.407 13:36:11 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:18.407 13:36:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:18.407 13:36:11 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:18.407 13:36:11 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:18.407 13:36:11 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:18.407 13:36:11 event -- common/autotest_common.sh@10 -- # set +x 00:08:18.407 ************************************ 00:08:18.407 START TEST event_perf 00:08:18.407 ************************************ 00:08:18.407 13:36:11 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:18.407 Running I/O for 1 seconds...[2024-06-11 13:36:11.282506] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:18.407 [2024-06-11 13:36:11.282587] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236084 ] 00:08:18.664 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.664 [2024-06-11 13:36:11.385425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.664 [2024-06-11 13:36:11.472758] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.664 [2024-06-11 13:36:11.472854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.664 [2024-06-11 13:36:11.472968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.664 [2024-06-11 13:36:11.472969] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.648 Running I/O for 1 seconds... 00:08:19.648 lcore 0: 184838 00:08:19.648 lcore 1: 184838 00:08:19.648 lcore 2: 184837 00:08:19.648 lcore 3: 184838 00:08:19.648 done. 00:08:19.648 00:08:19.648 real 0m1.290s 00:08:19.648 user 0m4.167s 00:08:19.648 sys 0m0.119s 00:08:19.648 13:36:12 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.648 13:36:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:19.648 ************************************ 00:08:19.648 END TEST event_perf 00:08:19.648 ************************************ 00:08:19.906 13:36:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:19.906 13:36:12 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:19.906 13:36:12 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.906 13:36:12 event -- common/autotest_common.sh@10 -- # set +x 00:08:19.906 ************************************ 00:08:19.906 START TEST event_reactor 00:08:19.906 ************************************ 00:08:19.906 13:36:12 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:19.906 [2024-06-11 13:36:12.650315] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:19.906 [2024-06-11 13:36:12.650373] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236281 ] 00:08:19.906 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.906 [2024-06-11 13:36:12.751799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.164 [2024-06-11 13:36:12.835027] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.099 test_start 00:08:21.099 oneshot 00:08:21.099 tick 100 00:08:21.099 tick 100 00:08:21.099 tick 250 00:08:21.099 tick 100 00:08:21.099 tick 100 00:08:21.099 tick 250 00:08:21.099 tick 100 00:08:21.099 tick 500 00:08:21.099 tick 100 00:08:21.099 tick 100 00:08:21.099 tick 250 00:08:21.099 tick 100 00:08:21.099 tick 100 00:08:21.099 test_end 00:08:21.099 00:08:21.099 real 0m1.277s 00:08:21.099 user 0m1.170s 00:08:21.099 sys 0m0.103s 00:08:21.099 13:36:13 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:21.099 13:36:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:21.099 ************************************ 00:08:21.099 END TEST event_reactor 00:08:21.099 ************************************ 00:08:21.099 13:36:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:21.099 13:36:13 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:21.099 13:36:13 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:21.099 13:36:13 event -- common/autotest_common.sh@10 -- # set +x 00:08:21.099 ************************************ 00:08:21.099 START TEST event_reactor_perf 00:08:21.099 ************************************ 00:08:21.099 13:36:13 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:21.357 [2024-06-11 13:36:14.014898] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:21.357 [2024-06-11 13:36:14.014961] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236475 ] 00:08:21.357 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.357 [2024-06-11 13:36:14.116431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.357 [2024-06-11 13:36:14.197984] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.731 test_start 00:08:22.731 test_end 00:08:22.731 Performance: 356605 events per second 00:08:22.731 00:08:22.731 real 0m1.282s 00:08:22.731 user 0m1.170s 00:08:22.731 sys 0m0.107s 00:08:22.731 13:36:15 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:22.731 13:36:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:22.731 ************************************ 00:08:22.731 END TEST event_reactor_perf 00:08:22.731 ************************************ 00:08:22.732 13:36:15 event -- event/event.sh@49 -- # uname -s 00:08:22.732 13:36:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:22.732 13:36:15 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:22.732 13:36:15 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:22.732 13:36:15 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:22.732 13:36:15 event -- common/autotest_common.sh@10 -- # set +x 00:08:22.732 ************************************ 00:08:22.732 START TEST event_scheduler 00:08:22.732 ************************************ 00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:22.732 * Looking for test storage... 00:08:22.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:22.732 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:22.732 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1236762 00:08:22.732 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.732 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:22.732 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1236762 00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1236762 ']' 00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
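The scheduler app was just launched with --wait-for-rpc, which is the key to this test: the scheduler is selected over RPC before subsystem initialization is released. The same sequence works against a stock target; a sketch, assuming the default RPC socket and that framework_set_scheduler and framework_start_init are available (both appear in the rpc_get_methods listing earlier in this log):

  # Hold initialization until an explicit RPC
  ./spdk/build/bin/spdk_tgt -m 0xF --wait-for-rpc &
  sleep 2                                  # crude; the real test polls with waitforlisten

  # Choose the dynamic scheduler while the framework is still parked...
  ./spdk/scripts/rpc.py framework_set_scheduler dynamic
  # ...then let initialization proceed
  ./spdk/scripts/rpc.py framework_start_init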
00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:22.732 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:22.732 [2024-06-11 13:36:15.512406] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:22.732 [2024-06-11 13:36:15.512470] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1236762 ] 00:08:22.732 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.732 [2024-06-11 13:36:15.590559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.991 [2024-06-11 13:36:15.668453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.991 [2024-06-11 13:36:15.668552] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.991 [2024-06-11 13:36:15.668588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.991 [2024-06-11 13:36:15.668589] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.991 13:36:15 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:22.991 13:36:15 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:08:22.991 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:22.991 13:36:15 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:22.991 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:22.991 POWER: Env isn't set yet! 00:08:22.991 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:22.991 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:22.991 POWER: Cannot set governor of lcore 0 to userspace 00:08:22.991 POWER: Attempting to initialise PSTAT power management... 
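The POWER lines above are a probe sequence, not a test failure: DPDK's power library first tries the ACPI cpufreq backend, which needs the 'userspace' governor it could not set here, then falls back to its pstate backend (typically the intel_pstate kernel driver), which, as the lines that follow show, it drives by setting the 'performance' governor per lcore. What a given host offers can be checked directly from the standard cpufreq sysfs paths:

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver                # e.g. intel_pstate
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors   # e.g. performance powersave
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor              # current setting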
00:08:22.991 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:08:22.991 POWER: Initialized successfully for lcore 0 power management 00:08:22.991 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:08:22.991 POWER: Initialized successfully for lcore 1 power management 00:08:22.991 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:08:22.991 POWER: Initialized successfully for lcore 2 power management 00:08:22.991 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:08:22.991 POWER: Initialized successfully for lcore 3 power management 00:08:22.991 [2024-06-11 13:36:15.750643] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:22.991 [2024-06-11 13:36:15.750658] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:22.991 [2024-06-11 13:36:15.750669] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:22.991 13:36:15 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:22.991 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:22.992 13:36:15 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:22.992 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:22.992 [2024-06-11 13:36:15.818289] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:22.992 13:36:15 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:22.992 13:36:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:22.992 13:36:15 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:22.992 13:36:15 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:22.992 13:36:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:22.992 ************************************ 00:08:22.992 START TEST scheduler_create_thread 00:08:22.992 ************************************ 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.992 2 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.992 3 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:22.992 4 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:22.992 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 5 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 6 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 7 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 8 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 9 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.251 10 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:23.251 13:36:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:24.627 13:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:24.627 13:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:24.627 13:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:24.627 13:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:24.627 13:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:25.561 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:25.561 13:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:25.561 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:25.561 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.127 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.127 13:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:26.127 13:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:26.127 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.127 13:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:27.061 13:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:27.061 00:08:27.061 real 0m3.893s 00:08:27.061 user 0m0.021s 00:08:27.061 sys 0m0.010s 00:08:27.061 13:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:27.061 13:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:27.061 ************************************ 00:08:27.061 END TEST scheduler_create_thread 00:08:27.061 ************************************ 00:08:27.061 13:36:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:27.061 13:36:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1236762 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1236762 ']' 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1236762 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
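Everything scheduler_create_thread just exercised goes through the test-only scheduler_plugin RPC extensions: scheduler_thread_create returns the new thread's id, scheduler_thread_set_active re-targets how busy the thread reports itself, and scheduler_thread_delete removes it. A sketch of the same lifecycle with plain rpc.py, assuming scheduler_plugin.py is importable (it lives under test/event/scheduler in the SPDK tree) and the scheduler app is listening on the default socket:

  export PYTHONPATH=./spdk/test/event/scheduler   # make the plugin importable (assumed layout)

  # Create a thread pinned to core 0 that reports itself 100% busy; capture its id
  id=$(./spdk/scripts/rpc.py --plugin scheduler_plugin \
          scheduler_thread_create -n active_pinned -m 0x1 -a 100)

  # Drop the reported load to 50% so the dynamic scheduler may move or idle it
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$id" 50

  # Tear the thread down again
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$id"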
00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1236762 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1236762' 00:08:27.061 killing process with pid 1236762 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1236762 00:08:27.061 13:36:19 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1236762 00:08:27.319 [2024-06-11 13:36:20.129836] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:27.578 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:08:27.578 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:08:27.578 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:08:27.578 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:08:27.578 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:08:27.578 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:08:27.578 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:08:27.578 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:08:27.578 00:08:27.578 real 0m5.038s 00:08:27.578 user 0m9.553s 00:08:27.578 sys 0m0.416s 00:08:27.578 13:36:20 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:27.578 13:36:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:27.578 ************************************ 00:08:27.578 END TEST event_scheduler 00:08:27.578 ************************************ 00:08:27.578 13:36:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:27.578 13:36:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:27.578 13:36:20 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:27.578 13:36:20 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:27.578 13:36:20 event -- common/autotest_common.sh@10 -- # set +x 00:08:27.578 ************************************ 00:08:27.578 START TEST app_repeat 00:08:27.578 ************************************ 00:08:27.578 13:36:20 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1237783 00:08:27.837 13:36:20 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1237783' 00:08:27.837 Process app_repeat pid: 1237783 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:27.837 spdk_app_start Round 0 00:08:27.837 13:36:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1237783 /var/tmp/spdk-nbd.sock 00:08:27.837 13:36:20 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1237783 ']' 00:08:27.837 13:36:20 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:27.837 13:36:20 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:27.837 13:36:20 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:27.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:27.837 13:36:20 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:27.837 13:36:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:27.837 [2024-06-11 13:36:20.522040] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:27.837 [2024-06-11 13:36:20.522098] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237783 ] 00:08:27.837 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.837 [2024-06-11 13:36:20.625564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.837 [2024-06-11 13:36:20.711460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.837 [2024-06-11 13:36:20.711466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.772 13:36:21 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:28.772 13:36:21 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:08:28.772 13:36:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:28.772 Malloc0 00:08:28.772 13:36:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:29.030 Malloc1 00:08:29.030 13:36:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:29.030 13:36:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.030 13:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:29.030 13:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:29.031 13:36:21 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.031 13:36:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:29.289 /dev/nbd0 00:08:29.289 13:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:29.289 13:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:29.289 1+0 records in 00:08:29.289 1+0 records out 00:08:29.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219786 s, 18.6 MB/s 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:29.289 13:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:08:29.289 13:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.289 13:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.289 13:36:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:29.547 /dev/nbd1 00:08:29.547 13:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:29.547 13:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:08:29.547 13:36:22 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:29.547 1+0 records in 00:08:29.547 1+0 records out 00:08:29.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027597 s, 14.8 MB/s 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:29.547 13:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:08:29.547 13:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.547 13:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.548 13:36:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:29.548 13:36:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.548 13:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:29.806 { 00:08:29.806 "nbd_device": "/dev/nbd0", 00:08:29.806 "bdev_name": "Malloc0" 00:08:29.806 }, 00:08:29.806 { 00:08:29.806 "nbd_device": "/dev/nbd1", 00:08:29.806 "bdev_name": "Malloc1" 00:08:29.806 } 00:08:29.806 ]' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:29.806 { 00:08:29.806 "nbd_device": "/dev/nbd0", 00:08:29.806 "bdev_name": "Malloc0" 00:08:29.806 }, 00:08:29.806 { 00:08:29.806 "nbd_device": "/dev/nbd1", 00:08:29.806 "bdev_name": "Malloc1" 00:08:29.806 } 00:08:29.806 ]' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:29.806 /dev/nbd1' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:29.806 /dev/nbd1' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:29.806 13:36:22 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:29.806 13:36:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:30.064 256+0 records in 00:08:30.064 256+0 records out 00:08:30.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114322 s, 91.7 MB/s 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:30.064 256+0 records in 00:08:30.064 256+0 records out 00:08:30.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169262 s, 61.9 MB/s 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:30.064 256+0 records in 00:08:30.064 256+0 records out 00:08:30.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286057 s, 36.7 MB/s 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:30.064 13:36:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.065 13:36:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.361 13:36:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:30.635 13:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:30.893 13:36:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:30.893 13:36:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:31.152 13:36:23 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:08:31.152 [2024-06-11 13:36:24.048449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.410 [2024-06-11 13:36:24.125597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.410 [2024-06-11 13:36:24.125601] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.410 [2024-06-11 13:36:24.169274] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:31.410 [2024-06-11 13:36:24.169324] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:33.940 13:36:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:33.940 13:36:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:33.940 spdk_app_start Round 1 00:08:33.940 13:36:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1237783 /var/tmp/spdk-nbd.sock 00:08:33.940 13:36:26 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1237783 ']' 00:08:33.940 13:36:26 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:33.940 13:36:26 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:33.940 13:36:26 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:33.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:33.940 13:36:26 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:33.940 13:36:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.198 13:36:27 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:34.198 13:36:27 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:08:34.198 13:36:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.456 Malloc0 00:08:34.456 13:36:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.714 Malloc1 00:08:34.714 13:36:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
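(Note on the trace above: the interleaved xtrace comes from three scripts, test/event/event.sh driving the test, nbd_common.sh providing the NBD helpers, and autotest_common.sh providing generic helpers. Reconstructed from the event.sh@NN references visible in the log, the app_repeat driver has roughly the following shape; this is a sketch inferred from the trace, not a verbatim copy of the script, and $rootdir stands for the spdk checkout.)

  rpc_addr=/var/tmp/spdk-nbd.sock
  "$rootdir/test/event/app_repeat/app_repeat" -r "$rpc_addr" -m 0x3 -t 4 &    # event.sh@18
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT                  # @20
  echo "Process app_repeat pid: $repeat_pid"                                  # @21
  for i in {0..2}; do                                                         # @23
      echo "spdk_app_start Round $i"                                          # @24
      waitforlisten "$repeat_pid" "$rpc_addr"                                 # @25
      # two 64 MiB malloc bdevs with 4 KiB blocks become Malloc0 and Malloc1
      "$rootdir/scripts/rpc.py" -s "$rpc_addr" bdev_malloc_create 64 4096     # @27
      "$rootdir/scripts/rpc.py" -s "$rpc_addr" bdev_malloc_create 64 4096     # @28
      nbd_rpc_data_verify "$rpc_addr" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' # @30
      # ask the app to SIGTERM itself, then give it time to restart
      "$rootdir/scripts/rpc.py" -s "$rpc_addr" spdk_kill_instance SIGTERM     # @34
      sleep 3                                                                 # @35
  done
  waitforlisten "$repeat_pid" "$rpc_addr"                                     # @38 ("Round 3")
  killprocess "$repeat_pid"                                                   # @39
  trap - SIGINT SIGTERM EXIT                                                  # @40

Each round therefore proves that spdk_app_start can come back up after a shutdown and still serve bdev and NBD RPCs on the same socket.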
00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:34.714 13:36:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:34.972 /dev/nbd0 00:08:34.972 13:36:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:34.972 13:36:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:34.972 1+0 records in 00:08:34.972 1+0 records out 00:08:34.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233997 s, 17.5 MB/s 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:34.972 13:36:27 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:08:34.972 13:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:34.972 13:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:34.972 13:36:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:35.231 /dev/nbd1 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
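(The waitfornbd helper being traced here, autotest_common.sh@867-@888, establishes that an exported NBD device is really usable before any data is written: it first polls /proc/partitions for the device name, then performs one 4 KiB O_DIRECT read and checks that the read produced a non-empty file. A minimal sketch reconstructed from the trace; the back-off between retries is an assumption, since the log never needs a second iteration, and $testdir stands for the .../spdk/test/event directory the scratch file lands in:)

  waitfornbd() {
      local nbd_name=$1                                      # @867
      local i                                                # @868
      # wait for the kernel to publish the device
      for ((i = 1; i <= 20; i++)); do                        # @870
          grep -q -w "$nbd_name" /proc/partitions && break   # @871-@872
          sleep 0.1                                          # assumed back-off
      done
      # prove the device is readable: one direct-I/O read into a scratch file
      for ((i = 1; i <= 20; i++)); do                        # @883
          dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct  # @884
          size=$(stat -c %s "$testdir/nbdtest")              # @885
          rm -f "$testdir/nbdtest"                           # @886
          [ "$size" != 0 ] && return 0                       # @887-@888
          sleep 0.1                                          # assumed back-off
      done
      return 1
  }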
00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.231 1+0 records in 00:08:35.231 1+0 records out 00:08:35.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240253 s, 17.0 MB/s 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:35.231 13:36:28 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.231 13:36:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:35.490 { 00:08:35.490 "nbd_device": "/dev/nbd0", 00:08:35.490 "bdev_name": "Malloc0" 00:08:35.490 }, 00:08:35.490 { 00:08:35.490 "nbd_device": "/dev/nbd1", 00:08:35.490 "bdev_name": "Malloc1" 00:08:35.490 } 00:08:35.490 ]' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:35.490 { 00:08:35.490 "nbd_device": "/dev/nbd0", 00:08:35.490 "bdev_name": "Malloc0" 00:08:35.490 }, 00:08:35.490 { 00:08:35.490 "nbd_device": "/dev/nbd1", 00:08:35.490 "bdev_name": "Malloc1" 00:08:35.490 } 00:08:35.490 ]' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:35.490 /dev/nbd1' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:35.490 /dev/nbd1' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:35.490 256+0 records in 00:08:35.490 256+0 records out 00:08:35.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107468 s, 97.6 MB/s 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:35.490 256+0 records in 00:08:35.490 256+0 records out 00:08:35.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027324 s, 38.4 MB/s 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.490 13:36:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:35.748 256+0 records in 00:08:35.748 256+0 records out 00:08:35.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248712 s, 42.2 MB/s 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.748 13:36:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.006 13:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.007 13:36:28 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.007 13:36:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.264 13:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.264 13:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.264 13:36:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.265 13:36:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.522 13:36:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.522 13:36:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:36.523 13:36:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:36.523 13:36:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:36.781 13:36:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:37.040 [2024-06-11 13:36:29.700005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.040 [2024-06-11 13:36:29.777125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.040 [2024-06-11 13:36:29.777130] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.040 [2024-06-11 13:36:29.821984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
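(The data check that runs inside every round above, nbd_common.sh@70-@85, is a plain dd/cmp round trip: fill a scratch file with 1 MiB of /dev/urandom, push it through each NBD device with O_DIRECT writes, then byte-compare the first 1 MiB of every device against the scratch file. Sketch reconstructed from the trace:)

  nbd_dd_data_verify() {
      local nbd_list=($1)                     # e.g. '/dev/nbd0 /dev/nbd1'   @70
      local operation=$2                      # 'write' or 'verify'          @71
      local tmp_file=$testdir/nbdrandtest     # @72
      if [ "$operation" = write ]; then       # @74
          # 256 x 4096 B = 1 MiB of random data, pushed to every device
          dd if=/dev/urandom of="$tmp_file" bs=4096 count=256               # @76
          for i in "${nbd_list[@]}"; do                                     # @77
              dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct      # @78
          done
      elif [ "$operation" = verify ]; then    # @80
          for i in "${nbd_list[@]}"; do                                     # @82
              cmp -b -n 1M "$tmp_file" "$i"   # @83: exits non-zero on mismatch
          done
          rm "$tmp_file"                      # @85
      fi
  }

Because the scratch file is written before the verify pass and only removed afterwards, silent data corruption anywhere in the malloc bdev or the NBD path would fail the cmp and abort the test.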
00:08:37.040 [2024-06-11 13:36:29.822033] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:40.322 13:36:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:40.322 13:36:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:40.322 spdk_app_start Round 2 00:08:40.322 13:36:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1237783 /var/tmp/spdk-nbd.sock 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1237783 ']' 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:40.322 13:36:32 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:08:40.322 13:36:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.322 Malloc0 00:08:40.322 13:36:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.322 Malloc1 00:08:40.322 13:36:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:40.322 13:36:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:40.580 /dev/nbd0 00:08:40.580 
13:36:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:40.580 13:36:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:40.580 1+0 records in 00:08:40.580 1+0 records out 00:08:40.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240805 s, 17.0 MB/s 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:40.580 13:36:33 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:08:40.581 13:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.581 13:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:40.581 13:36:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:40.839 /dev/nbd1 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:40.839 1+0 records in 00:08:40.839 1+0 records out 00:08:40.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253825 s, 16.1 MB/s 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:08:40.839 13:36:33 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.839 13:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.098 13:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.098 { 00:08:41.098 "nbd_device": "/dev/nbd0", 00:08:41.098 "bdev_name": "Malloc0" 00:08:41.098 }, 00:08:41.098 { 00:08:41.098 "nbd_device": "/dev/nbd1", 00:08:41.098 "bdev_name": "Malloc1" 00:08:41.098 } 00:08:41.098 ]' 00:08:41.098 13:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.098 { 00:08:41.098 "nbd_device": "/dev/nbd0", 00:08:41.098 "bdev_name": "Malloc0" 00:08:41.098 }, 00:08:41.098 { 00:08:41.098 "nbd_device": "/dev/nbd1", 00:08:41.098 "bdev_name": "Malloc1" 00:08:41.098 } 00:08:41.098 ]' 00:08:41.098 13:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.098 13:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:41.098 /dev/nbd1' 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:41.356 /dev/nbd1' 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:41.356 256+0 records in 00:08:41.356 256+0 records out 00:08:41.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108187 s, 96.9 MB/s 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:41.356 256+0 records in 00:08:41.356 256+0 records out 00:08:41.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166935 s, 62.8 MB/s 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:41.356 256+0 records in 00:08:41.356 256+0 records out 00:08:41.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286869 s, 36.6 MB/s 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.356 13:36:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.357 13:36:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
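(Stopping a disk is the mirror image of starting one: after nbd_stop_disk is issued over the RPC socket, waitfornbd_exit, nbd_common.sh@35-@45 above, polls /proc/partitions until the device name disappears. Sketch reconstructed from the trace, with the poll interval again an assumption:)

  waitfornbd_exit() {
      local nbd_name=$1                                      # @35
      local i
      for ((i = 1; i <= 20; i++)); do                        # @37
          grep -q -w "$nbd_name" /proc/partitions || break   # @38, @41: gone, stop polling
          sleep 0.1                                          # assumed back-off
      done
      return 0                                               # @45
  }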
00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.614 13:36:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:41.872 13:36:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:42.130 13:36:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:42.130 13:36:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:42.130 13:36:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:42.389 [2024-06-11 13:36:35.233681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:42.647 [2024-06-11 13:36:35.310497] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.647 [2024-06-11 13:36:35.310502] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.647 [2024-06-11 13:36:35.354177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:42.647 [2024-06-11 13:36:35.354225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
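(The count check that closes every round, nbd_common.sh@61-@66, turns the nbd_get_disks JSON into a device count with jq and grep -c: 2 while the disks are exported, 0 after teardown, as the empty '[]' just above shows. A rough reconstruction; the '|| true' guard is inferred from the bare 'true' step in the trace, since grep -c exits non-zero whenever it counts zero matches:)

  nbd_get_count() {
      local rpc_server=$1                                                          # @61
      nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)   # @63
      nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')         # @64
      count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)                   # @65
      echo "$count"                                                                # @66
  }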
00:08:45.174 13:36:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1237783 /var/tmp/spdk-nbd.sock 00:08:45.174 13:36:38 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1237783 ']' 00:08:45.174 13:36:38 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:45.174 13:36:38 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:45.174 13:36:38 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:45.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:45.174 13:36:38 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:45.174 13:36:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:08:45.432 13:36:38 event.app_repeat -- event/event.sh@39 -- # killprocess 1237783 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1237783 ']' 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1237783 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1237783 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1237783' 00:08:45.432 killing process with pid 1237783 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1237783 00:08:45.432 13:36:38 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1237783 00:08:45.725 spdk_app_start is called in Round 0. 00:08:45.725 Shutdown signal received, stop current app iteration 00:08:45.725 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:08:45.725 spdk_app_start is called in Round 1. 00:08:45.725 Shutdown signal received, stop current app iteration 00:08:45.725 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:08:45.725 spdk_app_start is called in Round 2. 00:08:45.725 Shutdown signal received, stop current app iteration 00:08:45.725 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:08:45.725 spdk_app_start is called in Round 3. 
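(Final teardown goes through killprocess, autotest_common.sh@949-@973 above. Before sending the signal it resolves the comm name of the pid with ps and special-cases sudo wrappers; in this log the name resolves to reactor_0, the SPDK main reactor thread. Sketch reconstructed from the trace; the sudo branch is never taken here, so its body is an assumption:)

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                             # @949: refuse an empty pid
      kill -0 "$pid"                                        # @953: bail out if already gone
      if [ "$(uname)" = Linux ]; then                       # @954
          process_name=$(ps --no-headers -o comm= "$pid")   # @955: 'reactor_0' in this log
      fi
      if [ "$process_name" = sudo ]; then                   # @959
          pid=$(pgrep -P "$pid")                            # assumption: target sudo's child instead
      fi
      echo "killing process with pid $pid"                  # @967
      kill "$pid"                                           # @968
      wait "$pid" || true                                   # @973: reap it before returning
  }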
00:08:45.725 Shutdown signal received, stop current app iteration 00:08:45.725 13:36:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:45.725 13:36:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:45.725 00:08:45.725 real 0m17.998s 00:08:45.725 user 0m38.973s 00:08:45.725 sys 0m3.553s 00:08:45.725 13:36:38 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:45.725 13:36:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:45.725 ************************************ 00:08:45.725 END TEST app_repeat 00:08:45.725 ************************************ 00:08:45.725 13:36:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:45.725 13:36:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:45.725 13:36:38 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:45.725 13:36:38 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:45.725 13:36:38 event -- common/autotest_common.sh@10 -- # set +x 00:08:45.725 ************************************ 00:08:45.725 START TEST cpu_locks 00:08:45.725 ************************************ 00:08:45.725 13:36:38 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:45.984 * Looking for test storage... 00:08:45.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:45.984 13:36:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:45.984 13:36:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:45.984 13:36:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:45.984 13:36:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:45.984 13:36:38 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:45.984 13:36:38 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:45.984 13:36:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:45.984 ************************************ 00:08:45.984 START TEST default_locks 00:08:45.984 ************************************ 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1241025 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1241025 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1241025 ']' 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
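(The cpu_locks suite that begins here verifies SPDK's per-core CPU lock files. Its central check, locks_exist at cpu_locks.sh@22, asks lslocks which files the target pid holds locks on and greps for the spdk_cpu_lock prefix; the 'lslocks: write error' printed just below is benign, it is lslocks hitting EPIPE because grep -q exits as soon as it finds a match. Sketch of the helper as reconstructed from the trace:)

  locks_exist() {
      local pid=$1
      # spdk_tgt -m 0x1 takes a file lock (spdk_cpu_lock*) for core 0;
      # grep -q succeeds iff such a lock shows up among the pid's locks
      lslocks -p "$pid" | grep -q spdk_cpu_lock    # cpu_locks.sh@22
  }

default_locks then asserts the straightforward lifecycle: the lock exists while spdk_tgt runs, and a second waitforlisten on the same pid after killprocess must fail, which is exactly the expected 'No such process' error traced below.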
00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:45.984 13:36:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:45.984 [2024-06-11 13:36:38.766898] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:45.984 [2024-06-11 13:36:38.766963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241025 ] 00:08:45.984 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.984 [2024-06-11 13:36:38.868713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.243 [2024-06-11 13:36:38.956664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.810 13:36:39 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:46.810 13:36:39 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:08:46.810 13:36:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1241025 00:08:46.810 13:36:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1241025 00:08:46.810 13:36:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:47.376 lslocks: write error 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1241025 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1241025 ']' 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1241025 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241025 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241025' 00:08:47.376 killing process with pid 1241025 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1241025 00:08:47.376 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1241025 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1241025 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1241025 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1241025 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1241025 ']' 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.634 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1241025) - No such process 00:08:47.635 ERROR: process (pid: 1241025) is no longer running 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:47.635 00:08:47.635 real 0m1.781s 00:08:47.635 user 0m1.905s 00:08:47.635 sys 0m0.653s 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:47.635 13:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.635 ************************************ 00:08:47.635 END TEST default_locks 00:08:47.635 ************************************ 00:08:47.635 13:36:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:47.635 13:36:40 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:47.635 13:36:40 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:47.635 13:36:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:47.894 ************************************ 00:08:47.894 START TEST default_locks_via_rpc 00:08:47.894 ************************************ 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1241452 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1241452 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:47.894 13:36:40 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1241452 ']' 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:47.894 13:36:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.894 [2024-06-11 13:36:40.635539] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:47.894 [2024-06-11 13:36:40.635600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241452 ] 00:08:47.894 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.894 [2024-06-11 13:36:40.740012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.153 [2024-06-11 13:36:40.825746] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.720 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:48.720 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:48.720 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:48.720 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.720 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.720 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1241452 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1241452 00:08:48.721 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.289 13:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1241452 00:08:49.289 13:36:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1241452 ']' 00:08:49.289 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1241452 00:08:49.289 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:08:49.289 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:49.289 13:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241452 00:08:49.289 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:49.289 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:49.289 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241452' 00:08:49.289 killing process with pid 1241452 00:08:49.289 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1241452 00:08:49.289 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1241452 00:08:49.548 00:08:49.548 real 0m1.761s 00:08:49.548 user 0m1.885s 00:08:49.548 sys 0m0.646s 00:08:49.548 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:49.548 13:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.548 ************************************ 00:08:49.548 END TEST default_locks_via_rpc 00:08:49.548 ************************************ 00:08:49.548 13:36:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:49.548 13:36:42 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:49.548 13:36:42 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:49.548 13:36:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.548 ************************************ 00:08:49.548 START TEST non_locking_app_on_locked_coremask 00:08:49.548 ************************************ 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1241864 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1241864 /var/tmp/spdk.sock 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1241864 ']' 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
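The default_locks_via_rpc pass above toggles the same per-core lock files over JSON-RPC rather than at process start. A minimal sketch of that toggle, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the in-tree scripts/rpc.py client:

    # Drop the /var/tmp/spdk_cpu_lock_* files the target holds for its cores...
    ./scripts/rpc.py framework_disable_cpumask_locks
    # ...then re-acquire them; the locks_exist check verifies the files are flocked again.
    ./scripts/rpc.py framework_enable_cpumask_locks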
00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:49.548 13:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:49.806 [2024-06-11 13:36:42.472080] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:49.806 [2024-06-11 13:36:42.472136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241864 ] 00:08:49.806 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.806 [2024-06-11 13:36:42.572636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.806 [2024-06-11 13:36:42.659895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1241887 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1241887 /var/tmp/spdk2.sock 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1241887 ']' 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:50.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:50.777 13:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.777 [2024-06-11 13:36:43.428745] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:50.777 [2024-06-11 13:36:43.428814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241887 ] 00:08:50.777 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.777 [2024-06-11 13:36:43.562364] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
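non_locking_app_on_locked_coremask checks that a second target can share core 0 only by opting out of the lock. A condensed sketch of the two launches traced above, assuming the build-tree paths shown in the log:

    # First target claims core 0 (mask 0x1) and flocks /var/tmp/spdk_cpu_lock_000.
    ./build/bin/spdk_tgt -m 0x1 &
    # Second target reuses the same core but skips lock acquisition entirely,
    # so both can run; it listens on its own RPC socket to avoid a clash.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &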
00:08:50.777 [2024-06-11 13:36:43.562397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.037 [2024-06-11 13:36:43.736263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.604 13:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:51.604 13:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:08:51.604 13:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1241864 00:08:51.604 13:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1241864 00:08:51.604 13:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:52.540 lslocks: write error 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1241864 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1241864 ']' 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1241864 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241864 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241864' 00:08:52.540 killing process with pid 1241864 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1241864 00:08:52.540 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1241864 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1241887 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1241887 ']' 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1241887 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1241887 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1241887' 00:08:53.109 
killing process with pid 1241887 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1241887 00:08:53.109 13:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1241887 00:08:53.677 00:08:53.677 real 0m3.898s 00:08:53.677 user 0m4.260s 00:08:53.677 sys 0m1.340s 00:08:53.677 13:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:53.677 13:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.677 ************************************ 00:08:53.677 END TEST non_locking_app_on_locked_coremask 00:08:53.677 ************************************ 00:08:53.677 13:36:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:53.677 13:36:46 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:53.677 13:36:46 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:53.677 13:36:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:53.677 ************************************ 00:08:53.677 START TEST locking_app_on_unlocked_coremask 00:08:53.677 ************************************ 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1242454 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1242454 /var/tmp/spdk.sock 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1242454 ']' 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:53.677 13:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.677 [2024-06-11 13:36:46.455670] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:53.677 [2024-06-11 13:36:46.455737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242454 ] 00:08:53.677 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.677 [2024-06-11 13:36:46.555545] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
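The locks_exist helper that recurs above reduces to asking the kernel which file locks a pid holds. A rough equivalent, assuming util-linux lslocks is available; the stray "lslocks: write error" lines in the output are most likely lslocks hitting a closed pipe after grep -q exits on its first match, not a test failure:

    pid=1241864   # example pid taken from the run above
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"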
00:08:53.677 [2024-06-11 13:36:46.555576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.936 [2024-06-11 13:36:46.634489] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1242717 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1242717 /var/tmp/spdk2.sock 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1242717 ']' 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:54.504 13:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.504 [2024-06-11 13:36:47.407807] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:54.504 [2024-06-11 13:36:47.407873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242717 ] 00:08:54.763 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.763 [2024-06-11 13:36:47.545486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.022 [2024-06-11 13:36:47.707882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.589 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:55.589 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:08:55.589 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1242717 00:08:55.589 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1242717 00:08:55.589 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:55.848 lslocks: write error 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1242454 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1242454 ']' 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1242454 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1242454 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1242454' 00:08:55.848 killing process with pid 1242454 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1242454 00:08:55.848 13:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1242454 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1242717 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1242717 ']' 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1242717 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1242717 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1242717' 00:08:56.786 killing process with pid 1242717 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1242717 00:08:56.786 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1242717 00:08:57.045 00:08:57.045 real 0m3.372s 00:08:57.045 user 0m3.670s 00:08:57.045 sys 0m1.068s 00:08:57.045 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:57.045 13:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:57.045 ************************************ 00:08:57.045 END TEST locking_app_on_unlocked_coremask 00:08:57.045 ************************************ 00:08:57.045 13:36:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:57.045 13:36:49 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:57.045 13:36:49 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:57.046 13:36:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.046 ************************************ 00:08:57.046 START TEST locking_app_on_locked_coremask 00:08:57.046 ************************************ 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1243157 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1243157 /var/tmp/spdk.sock 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1243157 ']' 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:57.046 13:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:57.046 [2024-06-11 13:36:49.911542] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:57.046 [2024-06-11 13:36:49.911605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243157 ] 00:08:57.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.305 [2024-06-11 13:36:50.014230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.305 [2024-06-11 13:36:50.107891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1243290 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1243290 /var/tmp/spdk2.sock 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1243290 /var/tmp/spdk2.sock 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1243290 /var/tmp/spdk2.sock 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1243290 ']' 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:58.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:58.243 13:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:58.243 [2024-06-11 13:36:50.868999] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:58.243 [2024-06-11 13:36:50.869063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243290 ] 00:08:58.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.243 [2024-06-11 13:36:51.003833] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1243157 has claimed it. 00:08:58.243 [2024-06-11 13:36:51.003884] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:58.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1243290) - No such process 00:08:58.812 ERROR: process (pid: 1243290) is no longer running 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1243157 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1243157 00:08:58.812 13:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:59.380 lslocks: write error 00:08:59.380 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1243157 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1243157 ']' 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1243157 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1243157 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1243157' 00:08:59.381 killing process with pid 1243157 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1243157 00:08:59.381 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1243157 00:08:59.948 00:08:59.948 real 0m2.751s 00:08:59.948 user 0m3.084s 00:08:59.948 sys 0m0.921s 00:08:59.948 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:08:59.948 13:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:59.948 ************************************ 00:08:59.948 END TEST locking_app_on_locked_coremask 00:08:59.949 ************************************ 00:08:59.949 13:36:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:59.949 13:36:52 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:59.949 13:36:52 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:59.949 13:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.949 ************************************ 00:08:59.949 START TEST locking_overlapped_coremask 00:08:59.949 ************************************ 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1243606 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1243606 /var/tmp/spdk.sock 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1243606 ']' 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:59.949 13:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:59.949 [2024-06-11 13:36:52.749446] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:59.949 [2024-06-11 13:36:52.749513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243606 ] 00:08:59.949 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.949 [2024-06-11 13:36:52.850309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.208 [2024-06-11 13:36:52.934799] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.208 [2024-06-11 13:36:52.934893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.208 [2024-06-11 13:36:52.934897] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1243859 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1243859 /var/tmp/spdk2.sock 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1243859 /var/tmp/spdk2.sock 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1243859 /var/tmp/spdk2.sock 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1243859 ']' 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:00.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:00.776 13:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.035 [2024-06-11 13:36:53.702915] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:09:01.035 [2024-06-11 13:36:53.702982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243859 ] 00:09:01.035 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.035 [2024-06-11 13:36:53.813979] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1243606 has claimed it. 00:09:01.035 [2024-06-11 13:36:53.814021] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:01.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1243859) - No such process 00:09:01.603 ERROR: process (pid: 1243859) is no longer running 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:01.603 13:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1243606 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1243606 ']' 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1243606 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1243606 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1243606' 00:09:01.604 killing process with pid 1243606 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
1243606 00:09:01.604 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1243606 00:09:01.863 00:09:01.863 real 0m2.035s 00:09:01.863 user 0m5.653s 00:09:01.863 sys 0m0.513s 00:09:01.863 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:01.863 13:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.863 ************************************ 00:09:01.863 END TEST locking_overlapped_coremask 00:09:01.863 ************************************ 00:09:01.863 13:36:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:01.863 13:36:54 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:01.863 13:36:54 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:01.863 13:36:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.122 ************************************ 00:09:02.122 START TEST locking_overlapped_coremask_via_rpc 00:09:02.122 ************************************ 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1244151 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1244151 /var/tmp/spdk.sock 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1244151 ']' 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:02.122 13:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.122 [2024-06-11 13:36:54.853670] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:02.122 [2024-06-11 13:36:54.853728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244151 ] 00:09:02.122 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.122 [2024-06-11 13:36:54.955011] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
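The conflict in locking_overlapped_coremask is visible from the masks alone: 0x7 covers cores 0-2, 0x1c covers cores 2-4, and both therefore claim core 2. A one-liner confirming the intersection:

    printf 'shared cores: 0x%x\n' $((0x7 & 0x1c))   # 0x4, i.e. the bit for core 2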
00:09:02.122 [2024-06-11 13:36:54.955040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:02.382 [2024-06-11 13:36:55.044630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.382 [2024-06-11 13:36:55.044726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.382 [2024-06-11 13:36:55.044730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1244183 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1244183 /var/tmp/spdk2.sock 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1244183 ']' 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:02.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:02.950 13:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.950 [2024-06-11 13:36:55.827489] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:02.950 [2024-06-11 13:36:55.827553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244183 ] 00:09:03.210 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.210 [2024-06-11 13:36:55.939547] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
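The via_rpc variant starts both overlapping targets with locks deactivated and only then races them to claim the files. A hedged outline of the sequence driven above, assuming both targets are already up:

    # First target (mask 0x7) claims cores 0-2 successfully.
    ./scripts/rpc.py framework_enable_cpumask_locks
    # Second target (mask 0x1c) must then fail: core 2 is already flocked.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks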
00:09:03.210 [2024-06-11 13:36:55.939576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.210 [2024-06-11 13:36:56.092500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.210 [2024-06-11 13:36:56.092578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.210 [2024-06-11 13:36:56.092579] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 [2024-06-11 13:36:56.755558] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1244151 has claimed it. 
00:09:04.148 request: 00:09:04.148 { 00:09:04.148 "method": "framework_enable_cpumask_locks", 00:09:04.148 "req_id": 1 00:09:04.148 } 00:09:04.148 Got JSON-RPC error response 00:09:04.148 response: 00:09:04.148 { 00:09:04.148 "code": -32603, 00:09:04.148 "message": "Failed to claim CPU core: 2" 00:09:04.148 } 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1244151 /var/tmp/spdk.sock 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1244151 ']' 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:04.148 13:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1244183 /var/tmp/spdk2.sock 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1244183 ']' 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:04.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
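The -32603 in the response above is the generic JSON-RPC internal-error code; the message pinpoints the actual cause, an exclusive lock already held on core 2's file. Underneath, the claim amounts to a non-blocking flock on the per-core path, sketched here with util-linux flock as a hypothetical stand-in for app.c's claim_cpu_cores:

    flock -n /var/tmp/spdk_cpu_lock_002 -c true \
        || echo "core 2 lock unavailable (held by pid 1244151 above)"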
00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:04.148 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:04.407 00:09:04.407 real 0m2.443s 00:09:04.407 user 0m1.157s 00:09:04.407 sys 0m0.216s 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:04.407 13:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.407 ************************************ 00:09:04.407 END TEST locking_overlapped_coremask_via_rpc 00:09:04.407 ************************************ 00:09:04.407 13:36:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:04.407 13:36:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1244151 ]] 00:09:04.407 13:36:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1244151 00:09:04.407 13:36:57 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1244151 ']' 00:09:04.407 13:36:57 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1244151 00:09:04.407 13:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:09:04.408 13:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:04.408 13:36:57 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1244151 00:09:04.667 13:36:57 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:04.667 13:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:04.667 13:36:57 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1244151' 00:09:04.667 killing process with pid 1244151 00:09:04.667 13:36:57 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1244151 00:09:04.667 13:36:57 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1244151 00:09:04.926 13:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1244183 ]] 00:09:04.926 13:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1244183 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1244183 ']' 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1244183 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1244183 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1244183' 00:09:04.926 killing process with pid 1244183 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1244183 00:09:04.926 13:36:57 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1244183 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1244151 ]] 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1244151 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1244151 ']' 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1244151 00:09:05.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1244151) - No such process 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1244151 is not found' 00:09:05.186 Process with pid 1244151 is not found 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1244183 ]] 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1244183 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1244183 ']' 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1244183 00:09:05.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1244183) - No such process 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1244183 is not found' 00:09:05.186 Process with pid 1244183 is not found 00:09:05.186 13:36:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:05.186 00:09:05.186 real 0m19.506s 00:09:05.186 user 0m33.668s 00:09:05.186 sys 0m6.437s 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:05.186 13:36:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.186 ************************************ 00:09:05.186 END TEST cpu_locks 00:09:05.186 ************************************ 00:09:05.445 00:09:05.445 real 0m47.008s 00:09:05.445 user 1m28.900s 00:09:05.445 sys 0m11.201s 00:09:05.445 13:36:58 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:05.445 13:36:58 event -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 ************************************ 00:09:05.445 END TEST event 00:09:05.445 ************************************ 00:09:05.445 13:36:58 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:05.445 13:36:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:05.445 13:36:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:05.445 13:36:58 -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 ************************************ 00:09:05.445 START TEST thread 00:09:05.445 ************************************ 00:09:05.445 13:36:58 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:05.445 * Looking for test storage... 00:09:05.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:05.445 13:36:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:05.445 13:36:58 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:09:05.445 13:36:58 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:05.445 13:36:58 thread -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 ************************************ 00:09:05.445 START TEST thread_poller_perf 00:09:05.445 ************************************ 00:09:05.445 13:36:58 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:05.705 [2024-06-11 13:36:58.366843] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:05.705 [2024-06-11 13:36:58.366925] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244795 ] 00:09:05.705 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.705 [2024-06-11 13:36:58.470526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.705 [2024-06-11 13:36:58.552006] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.705 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:07.083 ====================================== 00:09:07.083 busy:2514482404 (cyc) 00:09:07.083 total_run_count: 290000 00:09:07.083 tsc_hz: 2500000000 (cyc) 00:09:07.083 ====================================== 00:09:07.083 poller_cost: 8670 (cyc), 3468 (nsec) 00:09:07.083 00:09:07.083 real 0m1.296s 00:09:07.083 user 0m1.182s 00:09:07.083 sys 0m0.109s 00:09:07.083 13:36:59 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:07.083 13:36:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:07.083 ************************************ 00:09:07.083 END TEST thread_poller_perf 00:09:07.083 ************************************ 00:09:07.083 13:36:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:07.083 13:36:59 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:09:07.083 13:36:59 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:07.083 13:36:59 thread -- common/autotest_common.sh@10 -- # set +x 00:09:07.083 ************************************ 00:09:07.083 START TEST thread_poller_perf 00:09:07.083 ************************************ 00:09:07.083 13:36:59 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:07.083 [2024-06-11 13:36:59.736180] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
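The first ====================================== summary above is plain arithmetic over the TSC: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick shell check of the first run's numbers (the variable names are illustrative, not part of the harness):

    busy=2514482404; runs=290000; tsc_hz=2500000000
    echo $(( busy / runs ))                                             # 8670 cycles per poller invocation
    awk -v cyc=8670 -v hz="$tsc_hz" 'BEGIN { print cyc / (hz / 1e9) }'  # 3468 ns at 2.5 GHz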
00:09:07.083 [2024-06-11 13:36:59.736245] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245075 ] 00:09:07.083 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.083 [2024-06-11 13:36:59.837390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.083 [2024-06-11 13:36:59.918274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.083 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:08.522 ====================================== 00:09:08.522 busy:2502550282 (cyc) 00:09:08.522 total_run_count: 3821000 00:09:08.522 tsc_hz: 2500000000 (cyc) 00:09:08.522 ====================================== 00:09:08.522 poller_cost: 654 (cyc), 261 (nsec) 00:09:08.522 00:09:08.522 real 0m1.284s 00:09:08.522 user 0m1.169s 00:09:08.522 sys 0m0.109s 00:09:08.522 13:37:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:08.522 13:37:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 ************************************ 00:09:08.522 END TEST thread_poller_perf 00:09:08.522 ************************************ 00:09:08.522 13:37:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:08.522 00:09:08.522 real 0m2.835s 00:09:08.522 user 0m2.443s 00:09:08.522 sys 0m0.402s 00:09:08.522 13:37:01 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:08.522 13:37:01 thread -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 ************************************ 00:09:08.522 END TEST thread 00:09:08.522 ************************************ 00:09:08.522 13:37:01 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:08.522 13:37:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:08.522 13:37:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:08.522 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 ************************************ 00:09:08.522 START TEST accel 00:09:08.522 ************************************ 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:08.522 * Looking for test storage... 
00:09:08.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:08.522 13:37:01 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:08.522 13:37:01 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:08.522 13:37:01 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:08.522 13:37:01 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1245407 00:09:08.522 13:37:01 accel -- accel/accel.sh@63 -- # waitforlisten 1245407 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@830 -- # '[' -z 1245407 ']' 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.522 13:37:01 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:08.522 13:37:01 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:08.522 13:37:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:08.522 13:37:01 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 13:37:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:08.522 13:37:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.522 13:37:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.522 13:37:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:08.522 13:37:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:08.522 13:37:01 accel -- accel/accel.sh@41 -- # jq -r . 00:09:08.522 [2024-06-11 13:37:01.274648] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:08.522 [2024-06-11 13:37:01.274713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245407 ] 00:09:08.522 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.522 [2024-06-11 13:37:01.376249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.810 [2024-06-11 13:37:01.460720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@863 -- # return 0 00:09:09.377 13:37:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:09.377 13:37:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:09.377 13:37:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:09.377 13:37:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:09.377 13:37:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:09.377 13:37:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@10 -- # set +x 00:09:09.377 13:37:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # IFS== 00:09:09.377 13:37:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:09.377 13:37:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:09.377 13:37:02 accel -- accel/accel.sh@75 -- # killprocess 1245407 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@949 -- # '[' -z 1245407 ']' 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@953 -- # kill -0 1245407 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@954 -- # uname 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1245407 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1245407' 00:09:09.377 killing process with pid 1245407 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@968 -- # kill 1245407 00:09:09.377 13:37:02 accel -- common/autotest_common.sh@973 -- # wait 1245407 00:09:09.945 13:37:02 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:09.945 13:37:02 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:09.945 13:37:02 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:09.945 13:37:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:09.945 13:37:02 accel -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 13:37:02 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:09.945 13:37:02 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:09:09.945 13:37:02 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:09.945 13:37:02 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 13:37:02 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:09.945 13:37:02 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:09.945 13:37:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:09.945 13:37:02 accel -- common/autotest_common.sh@10 -- # set +x 00:09:09.945 ************************************ 00:09:09.945 START TEST accel_missing_filename 00:09:09.945 ************************************ 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:09.945 13:37:02 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:09.945 13:37:02 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:09.945 [2024-06-11 13:37:02.743184] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:09.945 [2024-06-11 13:37:02.743243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245707 ] 00:09:09.945 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.945 [2024-06-11 13:37:02.844233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.205 [2024-06-11 13:37:02.932052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.205 [2024-06-11 13:37:02.976394] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.205 [2024-06-11 13:37:03.038788] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:09:10.205 A filename is required. 
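"A filename is required." is the point of this test: compress with no -l input file must abort, and the NOT wrapper inverts that failure into a pass. The status juggling visible in the trace that follows (es=234, then 106, then 1) normalizes a signal-style exit before asserting it is non-zero; a simplified sketch of that expected-failure pattern, not the actual autotest_common.sh helper (which also maps specific statuses through a case table):

    # Sketch: succeed only when the wrapped command fails
    NOT() {
        "$@" && return 1                       # command unexpectedly succeeded
        local es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset (234 -> 106)
        (( es != 0 ))
    }
    NOT accel_perf -t 1 -w compress            # passes: accel_perf aborts without -l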
00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:10.464 00:09:10.464 real 0m0.401s 00:09:10.464 user 0m0.277s 00:09:10.464 sys 0m0.161s 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:10.464 13:37:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:10.464 ************************************ 00:09:10.464 END TEST accel_missing_filename 00:09:10.464 ************************************ 00:09:10.464 13:37:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:10.464 13:37:03 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:09:10.464 13:37:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:10.464 13:37:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.464 ************************************ 00:09:10.464 START TEST accel_compress_verify 00:09:10.464 ************************************ 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:10.464 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:10.464 13:37:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:10.464 13:37:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:10.464 13:37:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.464 13:37:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.464 13:37:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.465 13:37:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.465 13:37:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.465 
13:37:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:10.465 13:37:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:10.465 [2024-06-11 13:37:03.216043] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:10.465 [2024-06-11 13:37:03.216123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245738 ] 00:09:10.465 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.465 [2024-06-11 13:37:03.317646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.724 [2024-06-11 13:37:03.403016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.724 [2024-06-11 13:37:03.446986] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.724 [2024-06-11 13:37:03.508367] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:09:10.724 00:09:10.724 Compression does not support the verify option, aborting. 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:10.724 00:09:10.724 real 0m0.399s 00:09:10.724 user 0m0.289s 00:09:10.724 sys 0m0.150s 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:10.724 13:37:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:10.724 ************************************ 00:09:10.724 END TEST accel_compress_verify 00:09:10.724 ************************************ 00:09:10.724 13:37:03 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:10.724 13:37:03 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:10.724 13:37:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:10.724 13:37:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 ************************************ 00:09:10.984 START TEST accel_wrong_workload 00:09:10.984 ************************************ 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:10.984 13:37:03 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:09:10.984 Unsupported workload type: foobar 00:09:10.984 [2024-06-11 13:37:03.697256] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:10.984 accel_perf options: 00:09:10.984 [-h help message] 00:09:10.984 [-q queue depth per core] 00:09:10.984 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:10.984 [-T number of threads per core 00:09:10.984 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:10.984 [-t time in seconds] 00:09:10.984 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:10.984 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:10.984 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:10.984 [-l for compress/decompress workloads, name of uncompressed input file 00:09:10.984 [-S for crc32c workload, use this seed value (default 0) 00:09:10.984 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:10.984 [-f for fill workload, use this BYTE value (default 255) 00:09:10.984 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:10.984 [-y verify result if this switch is on] 00:09:10.984 [-a tasks to allocate per core (default: same value as -q)] 00:09:10.984 Can be used to spread operations across a wider range of memory. 
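This option table is printed because foobar is not a recognized -w workload; the argument is rejected at parse time, before any work is queued. The flags it documents are exactly the ones the real tests in this run combine, e.g. (binary and input paths as used elsewhere in this workspace):

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # 1 s of crc32c with seed 32, verifying each result
    ./build/examples/accel_perf -t 1 -w compress -y \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib   # compress a real input file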
00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:10.984 00:09:10.984 real 0m0.036s 00:09:10.984 user 0m0.018s 00:09:10.984 sys 0m0.018s 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:10.984 13:37:03 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 ************************************ 00:09:10.984 END TEST accel_wrong_workload 00:09:10.984 ************************************ 00:09:10.984 Error: writing output failed: Broken pipe 00:09:10.984 13:37:03 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:10.984 13:37:03 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:09:10.984 13:37:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:10.984 13:37:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.984 ************************************ 00:09:10.984 START TEST accel_negative_buffers 00:09:10.984 ************************************ 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:10.984 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:09:10.984 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:10.985 13:37:03 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:10.985 -x option must be non-negative. 
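This rejection is the same parse-time validation: per the option table, -x sets the xor source-buffer count (documented minimum 2), and accel_negative_buffers feeds it -1 so that argument checking, not the workload, fails the app; the parse error and another copy of the option table follow. The failing call under test, taken from the run_test line above, with NOT inverting its non-zero exit:

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative (and xor needs >= 2)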
00:09:10.985 [2024-06-11 13:37:03.817345] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:10.985 accel_perf options: 00:09:10.985 [-h help message] 00:09:10.985 [-q queue depth per core] 00:09:10.985 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:10.985 [-T number of threads per core 00:09:10.985 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:10.985 [-t time in seconds] 00:09:10.985 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:10.985 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:10.985 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:10.985 [-l for compress/decompress workloads, name of uncompressed input file 00:09:10.985 [-S for crc32c workload, use this seed value (default 0) 00:09:10.985 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:10.985 [-f for fill workload, use this BYTE value (default 255) 00:09:10.985 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:10.985 [-y verify result if this switch is on] 00:09:10.985 [-a tasks to allocate per core (default: same value as -q)] 00:09:10.985 Can be used to spread operations across a wider range of memory. 00:09:10.985 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:09:10.985 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:10.985 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:10.985 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:10.985 00:09:10.985 real 0m0.035s 00:09:10.985 user 0m0.021s 00:09:10.985 sys 0m0.014s 00:09:10.985 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:10.985 13:37:03 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:10.985 ************************************ 00:09:10.985 END TEST accel_negative_buffers 00:09:10.985 ************************************ 00:09:10.985 Error: writing output failed: Broken pipe 00:09:10.985 13:37:03 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:10.985 13:37:03 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:09:10.985 13:37:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:10.985 13:37:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:11.245 ************************************ 00:09:11.245 START TEST accel_crc32c 00:09:11.245 ************************************ 00:09:11.245 13:37:03 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:11.245 13:37:03 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:11.245 [2024-06-11 13:37:03.923292] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:11.245 [2024-06-11 13:37:03.923360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246048 ] 00:09:11.245 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.245 [2024-06-11 13:37:04.026025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.245 [2024-06-11 13:37:04.108124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.245 13:37:04 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:11.505 13:37:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.442 13:37:05 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:12.442 13:37:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:12.442 00:09:12.442 real 0m1.398s 00:09:12.442 user 0m0.008s 00:09:12.442 sys 0m0.000s 00:09:12.442 13:37:05 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:12.442 13:37:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:12.442 ************************************ 00:09:12.442 END TEST accel_crc32c 00:09:12.442 ************************************ 00:09:12.442 13:37:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:12.442 13:37:05 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:09:12.442 13:37:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:12.442 13:37:05 accel -- common/autotest_common.sh@10 -- # set +x 00:09:12.701 ************************************ 00:09:12.701 START TEST accel_crc32c_C2 00:09:12.701 ************************************ 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:12.701 13:37:05 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:12.701 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:12.701 [2024-06-11 13:37:05.399064] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:12.701 [2024-06-11 13:37:05.399135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246329 ] 00:09:12.701 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.701 [2024-06-11 13:37:05.501416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.701 [2024-06-11 13:37:05.582702] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:12.960 13:37:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.895 
13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.895 00:09:13.895 real 0m1.402s 00:09:13.895 user 0m0.008s 00:09:13.895 sys 0m0.000s 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:13.895 13:37:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:13.895 ************************************ 00:09:13.895 END TEST accel_crc32c_C2 00:09:13.895 ************************************ 00:09:14.153 13:37:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:14.153 13:37:06 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:14.153 13:37:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:14.153 13:37:06 accel -- common/autotest_common.sh@10 -- # set +x 00:09:14.153 ************************************ 00:09:14.153 START TEST accel_copy 00:09:14.153 ************************************ 00:09:14.153 13:37:06 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:14.153 13:37:06 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:14.153 13:37:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:14.153 [2024-06-11 13:37:06.868472] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:14.153 [2024-06-11 13:37:06.868535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246571 ] 00:09:14.154 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.154 [2024-06-11 13:37:06.969788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.154 [2024-06-11 13:37:07.051321] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:14.412 13:37:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
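The repeated IFS=: / read -r var val / case "$var" in entries traced above come from accel.sh's settings parser: accel_perf reports its configuration as key:value lines, and the loop cases on each key to capture the engine (accel_module=software) and opcode (accel_opc=copy) that the [[ -n ... ]] assertions check once the run finishes. A minimal sketch of that pattern, assuming hypothetical key names and a canned input (the real accel.sh tracks more keys and reads from the perf binary itself):

    while IFS=: read -r var val; do           # split each "key:value" line
        case "$var" in
            module) accel_module=$val ;;      # engine in use (here: software)
            opc)    accel_opc=$val ;;         # opcode under test (here: copy)
        esac
    done < <(printf 'module:software\nopc:copy\n')
    echo "$accel_module/$accel_opc"           # -> software/copy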
00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:15.347 13:37:08 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:15.347 00:09:15.347 real 0m1.394s 00:09:15.347 user 0m0.005s 00:09:15.347 sys 0m0.003s 00:09:15.347 13:37:08 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:15.347 13:37:08 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:15.347 ************************************ 00:09:15.347 END TEST accel_copy 00:09:15.347 ************************************ 00:09:15.606 13:37:08 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:15.606 13:37:08 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:15.606 13:37:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:15.606 13:37:08 accel -- common/autotest_common.sh@10 -- # set +x 00:09:15.606 ************************************ 00:09:15.606 START TEST accel_fill 00:09:15.606 ************************************ 00:09:15.606 13:37:08 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.606 13:37:08 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:15.606 13:37:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:15.606 [2024-06-11 13:37:08.342795] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:15.606 [2024-06-11 13:37:08.342870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246816 ] 00:09:15.606 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.606 [2024-06-11 13:37:08.444805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.865 [2024-06-11 13:37:08.527392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:15.865 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:15.866 13:37:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:17.243 13:37:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:09:17.244 13:37:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:17.244 00:09:17.244 real 0m1.403s 00:09:17.244 user 0m0.004s 00:09:17.244 sys 0m0.004s 00:09:17.244 13:37:09 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:17.244 13:37:09 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:09:17.244 ************************************ 00:09:17.244 END TEST accel_fill 00:09:17.244 ************************************ 00:09:17.244 13:37:09 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:17.244 13:37:09 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:17.244 13:37:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:17.244 13:37:09 accel -- common/autotest_common.sh@10 -- # set +x 00:09:17.244 ************************************ 00:09:17.244 START TEST accel_copy_crc32c 00:09:17.244 ************************************ 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:17.244 13:37:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
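The accel_perf command line echoed above can also be run by hand against this job's checkout. A minimal sketch, assuming the workspace path from this log and that an empty subsystems config fed over fd 62 is enough to satisfy the -c /dev/fd/62 plumbing accel.sh uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
    # 1-second copy_crc32c run with result verification (-y), as in the test above
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w copy_crc32c -y \
        62<<< '{"subsystems": []}'                           # assumed-minimal JSON config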
00:09:17.244 [2024-06-11 13:37:09.816125] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:17.244 [2024-06-11 13:37:09.816186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247062 ] 00:09:17.244 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.244 [2024-06-11 13:37:09.919560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.244 [2024-06-11 13:37:10.002589] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:17.244 13:37:10 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:17.244 13:37:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:18.622 00:09:18.622 real 0m1.402s 00:09:18.622 user 0m0.006s 00:09:18.622 sys 0m0.002s 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:18.622 13:37:11 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:18.622 ************************************ 00:09:18.622 END TEST accel_copy_crc32c 00:09:18.622 ************************************ 00:09:18.622 13:37:11 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:18.622 13:37:11 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:09:18.622 13:37:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:18.622 13:37:11 accel -- common/autotest_common.sh@10 -- # set +x 00:09:18.622 ************************************ 00:09:18.622 START TEST accel_copy_crc32c_C2 00:09:18.622 ************************************ 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:18.622 [2024-06-11 13:37:11.293110] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:18.622 [2024-06-11 13:37:11.293175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247294 ] 00:09:18.622 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.622 [2024-06-11 13:37:11.397749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.622 [2024-06-11 13:37:11.479604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.622 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:18.623 13:37:11 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.623 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:18.881 13:37:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:19.818 00:09:19.818 real 0m1.404s 00:09:19.818 user 0m1.259s 00:09:19.818 sys 0m0.149s 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:19.818 13:37:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:09:19.818 ************************************ 00:09:19.818 END TEST accel_copy_crc32c_C2 00:09:19.818 ************************************ 00:09:19.818 13:37:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:19.818 13:37:12 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:19.818 13:37:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:19.818 13:37:12 accel -- common/autotest_common.sh@10 -- # set +x 00:09:20.077 ************************************ 00:09:20.077 START TEST accel_dualcast 00:09:20.077 ************************************ 00:09:20.077 13:37:12 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:20.077 [2024-06-11 13:37:12.753226] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
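The dualcast run starting here writes a single source buffer out to two destinations and, with -y, verifies both copies. A loose coreutils analogy of that data flow (illustration only, not SPDK code; the buffer size matches the 4096 bytes in the trace):

    src=$(mktemp) dst1=$(mktemp) dst2=$(mktemp)
    head -c 4096 /dev/urandom > "$src"          # one 4096-byte source block
    tee "$dst1" "$dst2" < "$src" > /dev/null    # one read, two writes
    cmp -s "$src" "$dst1" && cmp -s "$src" "$dst2" && echo dualcast-verified
    rm -f "$src" "$dst1" "$dst2"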
00:09:20.077 [2024-06-11 13:37:12.753268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247547 ] 00:09:20.077 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.077 [2024-06-11 13:37:12.844189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.077 [2024-06-11 13:37:12.927016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.077 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 
13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.078 13:37:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:09:21.454 13:37:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:21.454 00:09:21.454 real 0m1.376s 00:09:21.454 user 0m1.239s 00:09:21.454 sys 0m0.141s 00:09:21.454 13:37:14 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:21.454 13:37:14 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:09:21.454 ************************************ 00:09:21.454 END TEST accel_dualcast 00:09:21.454 ************************************ 00:09:21.454 13:37:14 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:21.454 13:37:14 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:09:21.454 13:37:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.454 13:37:14 accel -- common/autotest_common.sh@10 -- # set +x 00:09:21.454 ************************************ 00:09:21.454 START TEST accel_compare 00:09:21.454 ************************************ 00:09:21.454 13:37:14 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:21.454 13:37:14 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:21.454 [2024-06-11 13:37:14.220451] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
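The compare workload beginning here checks two equal-length buffers byte for byte and reports success only if they match. Sketched with coreutils to show the semantics (illustration only; accel_perf does this in memory through the software engine):

    a=$(mktemp) b=$(mktemp)
    head -c 4096 /dev/urandom > "$a"             # 4096-byte buffer, as in the trace
    cp "$a" "$b"                                 # identical copy, so compare passes
    cmp -s "$a" "$b" && echo 'compare ok (rc=0)' # a mismatch would exit non-zero
    rm -f "$a" "$b"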
00:09:21.454 13:37:14 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
13:37:14 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']'
13:37:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
13:37:14 accel -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_compare
************************************
13:37:14 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y
13:37:14 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
13:37:14 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
13:37:14 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
13:37:14 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
13:37:14 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
[build_accel_config xtrace (accel_json_cfg=(), three [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=',', jq -r .) elided]
[2024-06-11 13:37:14.220451] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:09:21.454 [2024-06-11 13:37:14.220521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247796 ]
00:09:21.454 EAL: No free 2048 kB hugepages reported on node 1
00:09:21.454 [2024-06-11 13:37:14.322188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.823 [2024-06-11 13:37:14.412099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.823 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:09:21.823 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:09:21.823 13:37:14 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:09:21.824 13:37:14 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
[empty val= reads and the per-value case/IFS/read xtrace elided]
00:09:22.759 13:37:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:22.759 13:37:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:09:22.759 13:37:15 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:22.759
00:09:22.759 real 0m1.411s
00:09:22.759 user 0m1.260s
00:09:22.759 sys 0m0.156s
00:09:22.759 13:37:15 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:22.759 13:37:15 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:09:22.759 ************************************
00:09:22.759 END TEST accel_compare
00:09:22.759 ************************************
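The compare pass above parses out core mask 0x1, the compare opcode, 4096-byte transfers, the software module, two 32-valued fields (plausibly queue depth among them), a 1-second run time, and verification enabled. Under that reading, roughly the same run can be reproduced outside the harness; -q and -o are assumed here to be accel_perf's queue-depth and transfer-size flags, matched to the parsed values rather than copied from this run's command line:

    # 1-second verified compare on the software module
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compare -y -q 32 -o 4096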
13:37:15 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
13:37:15 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']'
13:37:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
13:37:15 accel -- common/autotest_common.sh@10 -- # set +x
00:09:23.018 ************************************
00:09:23.018 START TEST accel_xor
00:09:23.018 ************************************
13:37:15 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y
13:37:15 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
13:37:15 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
13:37:15 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
[build_accel_config xtrace and jq -r . elided]
[2024-06-11 13:37:15.701064] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:09:23.018 [2024-06-11 13:37:15.701120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248081 ]
00:09:23.018 EAL: No free 2048 kB hugepages reported on node 1
00:09:23.018 [2024-06-11 13:37:15.801518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.018 [2024-06-11 13:37:15.882922] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.018 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:09:23.276 13:37:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
[empty val= reads and the per-value case/IFS/read xtrace elided]
00:09:24.212 13:37:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:24.212 13:37:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:09:24.212 13:37:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:24.212
00:09:24.212 real 0m1.398s
00:09:24.212 user 0m1.245s
00:09:24.212 sys 0m0.158s
00:09:24.212 13:37:17 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:24.212 13:37:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:09:24.212 ************************************
00:09:24.212 END TEST accel_xor
00:09:24.212 ************************************
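Each block in this section is produced by the run_test helper from common/autotest_common.sh, which is where the START/END banners and the real/user/sys triplet (bash's time output) come from. A simplified sketch of that wrapper's shape; the real helper also manages xtrace state and error accounting:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # here: accel_test -t 1 -w xor -y
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }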
13:37:17 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
13:37:17 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']'
13:37:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
13:37:17 accel -- common/autotest_common.sh@10 -- # set +x
00:09:24.471 ************************************
00:09:24.471 START TEST accel_xor
00:09:24.471 ************************************
13:37:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3
13:37:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
13:37:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
13:37:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
[build_accel_config xtrace and jq -r . elided]
[2024-06-11 13:37:17.169225] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:09:24.471 [2024-06-11 13:37:17.169281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248366 ]
00:09:24.471 EAL: No free 2048 kB hugepages reported on node 1
00:09:24.471 [2024-06-11 13:37:17.268392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.471 [2024-06-11 13:37:17.349830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:09:24.730 13:37:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
[empty val= reads and the per-value case/IFS/read xtrace elided]
00:09:25.664 13:37:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:25.664 13:37:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:09:25.664 13:37:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:25.665
00:09:25.665 real 0m1.398s
00:09:25.665 user 0m1.253s
00:09:25.665 sys 0m0.149s
00:09:25.665 13:37:18 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:25.665 13:37:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:09:25.665 ************************************
00:09:25.665 END TEST accel_xor
00:09:25.665 ************************************
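The only difference from the previous xor pass is -x 3, visible in the parsed output as val=3: the XOR is computed across three source buffers instead of the default two. Standalone equivalent (same workspace binary; the JSON-config descriptor is omitted here since these runs pass an empty config anyway):

    # 1-second verified xor across three source buffers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w xor -y -x 3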
13:37:18 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
13:37:18 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
13:37:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
13:37:18 accel -- common/autotest_common.sh@10 -- # set +x
00:09:25.923 ************************************
00:09:25.923 START TEST accel_dif_verify
00:09:25.923 ************************************
13:37:18 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify
13:37:18 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
13:37:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
13:37:18 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
[build_accel_config xtrace and jq -r . elided]
[2024-06-11 13:37:18.646508] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:09:25.923 [2024-06-11 13:37:18.646564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248656 ]
00:09:25.923 EAL: No free 2048 kB hugepages reported on node 1
00:09:25.923 [2024-06-11 13:37:18.746320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:25.923 [2024-06-11 13:37:18.827312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:09:26.182 13:37:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
[empty val= reads and the per-value case/IFS/read xtrace elided]
00:09:27.116 13:37:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:27.116 13:37:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:09:27.116 13:37:20 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:27.116
00:09:27.116 real 0m1.399s
00:09:27.116 user 0m1.250s
00:09:27.116 sys 0m0.154s
00:09:27.116 13:37:20 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:27.116 13:37:20 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:09:27.116 ************************************
00:09:27.116 END TEST accel_dif_verify
00:09:27.116 ************************************
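The dif_verify pass adds size fields to the parsed output: 4096 bytes twice, 512 bytes, and 8 bytes, consistent with 4 KiB transfers split into 512-byte blocks that each carry an 8-byte DIF tuple (guard/application/reference tags); that reading is inferred from the values, the log itself does not label them. The verify workload alone:

    # 1-second software-module DIF verification
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w dif_verify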
13:37:20 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
13:37:20 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
13:37:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
13:37:20 accel -- common/autotest_common.sh@10 -- # set +x
00:09:27.374 ************************************
00:09:27.374 START TEST accel_dif_generate
00:09:27.374 ************************************
13:37:20 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate
13:37:20 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
13:37:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
13:37:20 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
[build_accel_config xtrace and jq -r . elided]
[2024-06-11 13:37:20.113162] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:09:27.374 [2024-06-11 13:37:20.113214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248935 ]
00:09:27.374 EAL: No free 2048 kB hugepages reported on node 1
00:09:27.374 [2024-06-11 13:37:20.213464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.633 [2024-06-11 13:37:20.295800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:09:27.633 13:37:20 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:09:27.634 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:09:27.634 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:09:27.634 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:09:27.634 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:09:27.634 13:37:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
[empty val= reads and the per-value case/IFS/read xtrace elided]
00:09:29.009 13:37:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:29.009 13:37:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:09:29.009 13:37:21 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:29.009
00:09:29.009 real 0m1.395s
00:09:29.009 user 0m1.256s
00:09:29.009 sys 0m0.144s
00:09:29.009 13:37:21 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:29.009 13:37:21 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:09:29.009 ************************************
00:09:29.009 END TEST accel_dif_generate
00:09:29.009 ************************************
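Every pass so far reports roughly 1.4 s real for a -t 1 workload, i.e. about 0.4 s of SPDK app start-up and teardown around the timed second, with user time tracking real time closely. A quick way to pull those triplets out of a saved copy of this log (the build.log filename is an assumption):

    # list wall/user/sys per test, in order of appearance
    grep -E '^[0-9:.]+ (real|user|sys)' build.log | awk '{print $2, $3}'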
13:37:21 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
13:37:21 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
13:37:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
13:37:21 accel -- common/autotest_common.sh@10 -- # set +x
00:09:29.010 ************************************
00:09:29.010 START TEST accel_dif_generate_copy
00:09:29.010 ************************************
13:37:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy
13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
[build_accel_config xtrace and jq -r . elided]
[2024-06-11 13:37:21.582448] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:09:29.010 [2024-06-11 13:37:21.582507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249221 ]
00:09:29.010 EAL: No free 2048 kB hugepages reported on node 1
00:09:29.010 [2024-06-11 13:37:21.682248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.010 [2024-06-11 13:37:21.767281] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:09:29.010 13:37:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
[empty val= reads and the per-value case/IFS/read xtrace elided]
00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:30.386
00:09:30.386 real 0m1.401s
00:09:30.386 user 0m1.249s
00:09:30.386 sys 0m0.156s
00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:09:30.386 ************************************
00:09:30.386 END TEST accel_dif_generate_copy
00:09:30.386 ************************************
00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:30.386 00:09:30.386 real 0m1.401s 00:09:30.386 user 0m1.249s 00:09:30.386 sys 0m0.156s 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:30.386 13:37:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.386 ************************************ 00:09:30.386 END TEST accel_dif_generate_copy 00:09:30.386 ************************************ 00:09:30.386 13:37:22 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:30.386 13:37:22 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:30.386 13:37:22 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:09:30.386 13:37:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:30.386 13:37:22 accel -- common/autotest_common.sh@10 -- # set +x 00:09:30.386 ************************************ 00:09:30.386 START TEST accel_comp 00:09:30.386 ************************************ 00:09:30.386 13:37:23 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:30.386 [2024-06-11 13:37:23.055210] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:30.386 [2024-06-11 13:37:23.055284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249507 ] 00:09:30.386 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.386 [2024-06-11 13:37:23.157660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.386 [2024-06-11 13:37:23.238847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 
13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:30.386 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.387 13:37:23 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:30.387 13:37:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:09:31.763 13:37:24 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:31.763 00:09:31.763 real 0m1.405s 00:09:31.763 user 0m1.254s 00:09:31.763 sys 0m0.156s 00:09:31.763 13:37:24 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:31.763 13:37:24 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:09:31.763 ************************************ 00:09:31.763 END TEST accel_comp 00:09:31.763 ************************************ 00:09:31.763 13:37:24 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:31.763 13:37:24 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:09:31.763 13:37:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:31.763 13:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:09:31.763 ************************************ 00:09:31.763 START TEST accel_decomp 00:09:31.763 ************************************ 00:09:31.763 13:37:24 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:09:31.763 13:37:24 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:09:31.763 [2024-06-11 13:37:24.527001] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:31.763 [2024-06-11 13:37:24.527058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249788 ] 00:09:31.763 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.763 [2024-06-11 13:37:24.626091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.023 [2024-06-11 13:37:24.708012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:32.023 13:37:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:33.400 13:37:25 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:33.400 00:09:33.400 real 0m1.399s 00:09:33.400 user 0m1.249s 00:09:33.400 sys 0m0.155s 00:09:33.400 13:37:25 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:33.400 13:37:25 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:09:33.400 ************************************ 00:09:33.400 END TEST accel_decomp 00:09:33.400 ************************************ 00:09:33.400 
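Nearly every line in the trace that just finished is bash xtrace (the harness runs with set -x) of one small parsing loop in accel.sh: it reads colon-separated name:value pairs, so each iteration logs an IFS=:, a read -r var val, and a case "$var" in, with the occasional val=... line showing the value being handled (val=software picks the engine, val=decompress the opcode, val='1 seconds' the duration, and so on). Stripped of the test specifics, the traced shape is roughly the following paraphrase (not the verbatim accel.sh source; the input redirection is only for the sketch):

  # Consume "name:value" lines and record the settings the test asserts on later
  while IFS=: read -r var val; do
      case "$var" in
          accel_opc)    accel_opc=$val ;;     # e.g. decompress
          accel_module) accel_module=$val ;;  # e.g. software
      esac
  done < settings.txt   # hypothetical input file for this sketch

The [[ -n software ]], [[ -n decompress ]] and [[ software == software ]] checks that close each test are then just asserting that both settings were seen and that the software engine really handled the run.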
13:37:25 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:33.400 13:37:25 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:09:33.400 13:37:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:33.400 13:37:25 accel -- common/autotest_common.sh@10 -- # set +x 00:09:33.400 ************************************ 00:09:33.400 START TEST accel_decomp_full 00:09:33.400 ************************************ 00:09:33.400 13:37:25 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:33.400 13:37:25 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:09:33.400 13:37:25 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:09:33.400 13:37:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.400 13:37:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.400 13:37:25 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:09:33.401 13:37:25 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:09:33.401 [2024-06-11 13:37:26.000775] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:09:33.401 [2024-06-11 13:37:26.000830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250073 ] 00:09:33.401 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.401 [2024-06-11 13:37:26.102595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.401 [2024-06-11 13:37:26.183996] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
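As with every test in this file, three notices open the run: each accel test launches its own short-lived SPDK application, and the bracketed list is the DPDK EAL command line that spdk_app_start assembled for it (single-core mask -c 0x1 here, a per-PID --file-prefix so hugepage files from back-to-back runs cannot collide, and --huge-unlink so the backing files are unlinked right after being mapped). The 'EAL: No free 2048 kB hugepages reported on node 1' line is informational and normally just means no 2 MiB pool was reserved on that NUMA node. A standard sysfs check (generic Linux, not part of this harness) shows the per-node pools:

  # Reserved vs. free 2 MiB hugepages on each NUMA node
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages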
00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:33.401 13:37:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:34.778 13:37:27 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:34.778 00:09:34.778 real 0m1.415s 00:09:34.778 user 0m1.259s 00:09:34.778 sys 0m0.161s 00:09:34.778 13:37:27 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:34.778 13:37:27 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:34.778 ************************************ 00:09:34.778 END TEST accel_decomp_full 00:09:34.778 ************************************ 00:09:34.778 13:37:27 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:34.778 13:37:27 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:09:34.778 13:37:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:34.778 13:37:27 accel -- common/autotest_common.sh@10 -- # set +x 00:09:34.778 ************************************ 00:09:34.778 START TEST accel_decomp_mcore 00:09:34.778 ************************************ 00:09:34.778 13:37:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:34.778 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:34.779 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:34.779 [2024-06-11 13:37:27.468648] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:34.779 [2024-06-11 13:37:27.468690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250357 ] 00:09:34.779 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.779 [2024-06-11 13:37:27.559377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.779 [2024-06-11 13:37:27.648531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.779 [2024-06-11 13:37:27.648624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.779 [2024-06-11 13:37:27.648721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.779 [2024-06-11 13:37:27.648724] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.038 13:37:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:35.974 00:09:35.974 real 0m1.396s 00:09:35.974 user 0m4.611s 00:09:35.974 sys 0m0.145s 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:35.974 13:37:28 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:35.974 ************************************ 00:09:35.974 END TEST accel_decomp_mcore 00:09:35.974 ************************************ 00:09:36.234 13:37:28 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:36.234 13:37:28 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:36.234 13:37:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:36.234 13:37:28 accel -- common/autotest_common.sh@10 -- # set +x 00:09:36.234 ************************************ 00:09:36.234 START TEST accel_decomp_full_mcore 00:09:36.234 ************************************ 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:36.234 13:37:28 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:36.234 [2024-06-11 13:37:28.964050] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:36.234 [2024-06-11 13:37:28.964108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250649 ] 00:09:36.234 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.234 [2024-06-11 13:37:29.066108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.493 [2024-06-11 13:37:29.154520] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.493 [2024-06-11 13:37:29.154614] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.493 [2024-06-11 13:37:29.154727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.493 [2024-06-11 13:37:29.154728] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:36.493 13:37:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:36.493 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:36.494 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:36.494 13:37:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:37.871 00:09:37.871 real 0m1.440s 00:09:37.871 user 0m4.662s 00:09:37.871 sys 0m0.177s 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:37.871 13:37:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:37.871 ************************************ 00:09:37.871 END TEST accel_decomp_full_mcore 00:09:37.871 ************************************ 00:09:37.871 13:37:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:37.871 13:37:30 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:09:37.871 13:37:30 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:37.871 13:37:30 accel -- common/autotest_common.sh@10 -- # set +x 00:09:37.871 ************************************ 00:09:37.871 START TEST accel_decomp_mthread 00:09:37.871 ************************************ 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:09:37.871 [2024-06-11 13:37:30.482548] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:37.871 [2024-06-11 13:37:30.482621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250933 ] 00:09:37.871 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.871 [2024-06-11 13:37:30.584339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.871 [2024-06-11 13:37:30.668362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.871 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:37.872 13:37:30 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:39.249 00:09:39.249 real 0m1.417s 00:09:39.249 user 0m1.278s 00:09:39.249 sys 0m0.154s 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:39.249 13:37:31 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:39.249 ************************************ 00:09:39.249 END TEST accel_decomp_mthread 00:09:39.249 ************************************ 00:09:39.249 13:37:31 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:39.249 13:37:31 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:09:39.249 13:37:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:39.249 13:37:31 
accel -- common/autotest_common.sh@10 -- # set +x 00:09:39.249 ************************************ 00:09:39.249 START TEST accel_decomp_full_mthread 00:09:39.249 ************************************ 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:39.249 13:37:31 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:39.249 [2024-06-11 13:37:31.983736] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
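Stripped of the harness, the command being traced here (assembled at accel.sh@12 above) is a single accel_perf run; versus the mcore variants it swaps the -m core-mask flag for -T 2 worker threads and keeps the 111250-byte "full" input. The paths below are verbatim from this workspace; the empty JSON object standing in for build_accel_config's output on the config fd is an assumption of the sketch:

# Hand-run equivalent of the traced accel_perf invocation (a sketch; the
# empty JSON piped to the config fd is a stand-in for build_accel_config).
# -t 1: run for '1 seconds'    -w decompress: the opcode under test
# -l ...bib: compressed input  -T 2: the two worker threads of "mthread"
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -c <(echo '{}') \
    -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2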
00:09:39.249 [2024-06-11 13:37:31.983801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251218 ] 00:09:39.249 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.249 [2024-06-11 13:37:32.087866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.511 [2024-06-11 13:37:32.169070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.511 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:39.512 13:37:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:40.891 00:09:40.891 real 0m1.443s 00:09:40.891 user 0m1.287s 00:09:40.891 sys 0m0.169s 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:40.891 13:37:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:40.891 ************************************ 00:09:40.891 END TEST accel_decomp_full_mthread 00:09:40.891 
************************************ 00:09:40.891 13:37:33 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:40.891 13:37:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:40.891 13:37:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:40.891 13:37:33 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:40.891 13:37:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:40.891 13:37:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:40.891 13:37:33 accel -- common/autotest_common.sh@10 -- # set +x 00:09:40.891 13:37:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:40.891 13:37:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:40.891 13:37:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:40.891 13:37:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:40.891 13:37:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:40.891 13:37:33 accel -- accel/accel.sh@41 -- # jq -r . 00:09:40.891 ************************************ 00:09:40.891 START TEST accel_dif_functional_tests 00:09:40.891 ************************************ 00:09:40.891 13:37:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:40.891 [2024-06-11 13:37:33.529823] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:40.891 [2024-06-11 13:37:33.529883] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251505 ] 00:09:40.891 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.891 [2024-06-11 13:37:33.633003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.891 [2024-06-11 13:37:33.717615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.891 [2024-06-11 13:37:33.717708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.891 [2024-06-11 13:37:33.717710] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.891 00:09:40.891 00:09:40.891 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.891 http://cunit.sourceforge.net/ 00:09:40.891 00:09:40.891 00:09:40.891 Suite: accel_dif 00:09:40.891 Test: verify: DIF generated, GUARD check ...passed 00:09:40.891 Test: verify: DIF generated, APPTAG check ...passed 00:09:40.891 Test: verify: DIF generated, REFTAG check ...passed 00:09:40.891 Test: verify: DIF not generated, GUARD check ...[2024-06-11 13:37:33.791102] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:40.891 passed 00:09:40.891 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 13:37:33.791166] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:40.891 passed 00:09:40.891 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 13:37:33.791200] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:40.891 passed 00:09:40.891 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:40.891 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 13:37:33.791260] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:40.891 passed 00:09:40.891 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:09:40.891 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:40.891 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:40.891 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 13:37:33.791401] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:40.891 passed 00:09:40.891 Test: verify copy: DIF generated, GUARD check ...passed 00:09:40.891 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:40.891 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:40.891 Test: verify copy: DIF not generated, GUARD check ...[2024-06-11 13:37:33.791550] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:40.891 passed 00:09:40.891 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-11 13:37:33.791583] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:40.891 passed 00:09:40.891 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-11 13:37:33.791612] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:40.891 passed 00:09:40.891 Test: generate copy: DIF generated, GUARD check ...passed 00:09:40.891 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:40.891 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:40.891 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:40.891 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:40.891 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:40.891 Test: generate copy: iovecs-len validate ...[2024-06-11 13:37:33.791845] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:40.891 passed 00:09:40.891 Test: generate copy: buffer alignment validate ...passed 00:09:40.891 00:09:40.891 Run Summary: Type Total Ran Passed Failed Inactive 00:09:40.891 suites 1 1 n/a 0 0 00:09:40.891 tests 26 26 26 0 0 00:09:40.891 asserts 115 115 115 0 n/a 00:09:40.891 00:09:40.891 Elapsed time = 0.002 seconds 00:09:41.151 00:09:41.151 real 0m0.494s 00:09:41.151 user 0m0.682s 00:09:41.151 sys 0m0.189s 00:09:41.151 13:37:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.151 13:37:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:41.151 ************************************ 00:09:41.151 END TEST accel_dif_functional_tests 00:09:41.151 ************************************ 00:09:41.151 00:09:41.151 real 0m32.896s 00:09:41.151 user 0m35.416s 00:09:41.151 sys 0m5.544s 00:09:41.151 13:37:34 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:41.151 13:37:34 accel -- common/autotest_common.sh@10 -- # set +x 00:09:41.151 ************************************ 00:09:41.151 END TEST accel 00:09:41.151 ************************************ 00:09:41.410 13:37:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:41.410 13:37:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:41.410 13:37:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:41.410 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:09:41.410 ************************************ 00:09:41.410 START TEST accel_rpc 00:09:41.410 ************************************ 00:09:41.410 13:37:34 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:41.410 * Looking for test storage... 00:09:41.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:41.410 13:37:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:41.410 13:37:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1251576 00:09:41.410 13:37:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1251576 00:09:41.410 13:37:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:41.410 13:37:34 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1251576 ']' 00:09:41.411 13:37:34 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.411 13:37:34 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:41.411 13:37:34 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.411 13:37:34 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:41.411 13:37:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.411 [2024-06-11 13:37:34.276994] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
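The target just launched for accel_rpc runs with --wait-for-rpc, so it stops before subsystem init; that is why the first accel_assign_opc below can name a module "incorrect" and still get a NOTICE rather than an error (the mapping is presumably only resolved once framework_start_init runs). The traced sequence, issued by hand, would look roughly like this (rpc.py path verbatim from this workspace):

# The accel_assign_opcode flow as individual RPCs (sketch; ordering is the
# point -- assignments must land before framework_start_init):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC accel_assign_opc -o copy -m incorrect    # accepted pre-init (see NOTICE)
$RPC accel_assign_opc -o copy -m software     # overrides the bogus module
$RPC framework_start_init                     # init resolves the assignment
$RPC accel_get_opc_assignments | jq -r .copy  # prints "software", as below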
00:09:41.411 [2024-06-11 13:37:34.277059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251576 ] 00:09:41.411 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.668 [2024-06-11 13:37:34.370276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.668 [2024-06-11 13:37:34.456807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.604 13:37:35 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:42.604 13:37:35 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:09:42.604 13:37:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:42.604 13:37:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:42.604 13:37:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:42.604 13:37:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:42.604 13:37:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:42.604 13:37:35 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:42.604 13:37:35 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:42.604 13:37:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.604 ************************************ 00:09:42.604 START TEST accel_assign_opcode 00:09:42.604 ************************************ 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.604 [2024-06-11 13:37:35.227178] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.604 [2024-06-11 13:37:35.235189] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:42.604 13:37:35 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.604 software 00:09:42.604 00:09:42.604 real 0m0.258s 00:09:42.604 user 0m0.050s 00:09:42.604 sys 0m0.013s 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:42.604 13:37:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:42.604 ************************************ 00:09:42.604 END TEST accel_assign_opcode 00:09:42.604 ************************************ 00:09:42.863 13:37:35 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1251576 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1251576 ']' 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1251576 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1251576 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1251576' 00:09:42.863 killing process with pid 1251576 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@968 -- # kill 1251576 00:09:42.863 13:37:35 accel_rpc -- common/autotest_common.sh@973 -- # wait 1251576 00:09:43.122 00:09:43.122 real 0m1.797s 00:09:43.122 user 0m1.871s 00:09:43.122 sys 0m0.566s 00:09:43.122 13:37:35 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:43.122 13:37:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.122 ************************************ 00:09:43.122 END TEST accel_rpc 00:09:43.122 ************************************ 00:09:43.122 13:37:35 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:43.122 13:37:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:43.122 13:37:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:43.122 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:09:43.122 ************************************ 00:09:43.122 START TEST app_cmdline 00:09:43.122 ************************************ 00:09:43.122 13:37:35 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:43.381 * Looking for test storage... 
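app_cmdline points this second spdk_tgt at an RPC allowlist (--rpcs-allowed spdk_get_version,rpc_get_methods, per the launch line above), so the test below has two halves: the allowed calls must return real data, and anything else must come back as JSON-RPC error -32601. By hand (rpc.py path verbatim):

# The allowlist check traced below as manual RPC calls (sketch):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC spdk_get_version         # allowed  -> the version object printed below
$RPC rpc_get_methods          # allowed  -> exactly the two permitted methods
$RPC env_dpdk_get_mem_stats   # blocked  -> -32601 "Method not found", as below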
00:09:43.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:43.381 13:37:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:43.381 13:37:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1251968 00:09:43.381 13:37:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1251968 00:09:43.381 13:37:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:43.381 13:37:36 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1251968 ']' 00:09:43.381 13:37:36 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.381 13:37:36 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:43.381 13:37:36 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.381 13:37:36 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:43.381 13:37:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:43.381 [2024-06-11 13:37:36.146111] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:43.381 [2024-06-11 13:37:36.146182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251968 ] 00:09:43.381 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.381 [2024-06-11 13:37:36.247306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.640 [2024-06-11 13:37:36.334645] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.207 13:37:37 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:44.207 13:37:37 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:09:44.207 13:37:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:44.465 { 00:09:44.465 "version": "SPDK v24.09-pre git sha1 9ccef4907", 00:09:44.465 "fields": { 00:09:44.465 "major": 24, 00:09:44.465 "minor": 9, 00:09:44.465 "patch": 0, 00:09:44.465 "suffix": "-pre", 00:09:44.465 "commit": "9ccef4907" 00:09:44.465 } 00:09:44.465 } 00:09:44.465 13:37:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:44.465 13:37:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:44.465 13:37:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:44.465 13:37:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:44.465 13:37:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:44.465 13:37:37 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:44.466 13:37:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:44.466 13:37:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:44.466 13:37:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:44.466 13:37:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:44.466 13:37:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:44.466 13:37:37 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:44.725 request: 00:09:44.725 { 00:09:44.725 "method": "env_dpdk_get_mem_stats", 00:09:44.725 "req_id": 1 00:09:44.725 } 00:09:44.725 Got JSON-RPC error response 00:09:44.725 response: 00:09:44.725 { 00:09:44.725 "code": -32601, 00:09:44.725 "message": "Method not found" 00:09:44.725 } 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:44.725 13:37:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1251968 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1251968 ']' 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1251968 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1251968 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1251968' 00:09:44.725 killing process with pid 1251968 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@968 -- # kill 1251968 00:09:44.725 13:37:37 app_cmdline -- common/autotest_common.sh@973 -- # wait 1251968 00:09:45.294 00:09:45.294 real 0m1.939s 00:09:45.294 user 0m2.352s 00:09:45.294 sys 0m0.558s 00:09:45.294 13:37:37 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:45.294 13:37:37 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:45.294 ************************************ 00:09:45.294 END TEST app_cmdline 00:09:45.294 ************************************ 00:09:45.294 13:37:37 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:45.294 13:37:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:09:45.294 13:37:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:45.294 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:09:45.294 ************************************ 00:09:45.294 START TEST version 00:09:45.294 ************************************ 00:09:45.294 13:37:37 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:45.294 * Looking for test storage... 00:09:45.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:45.294 13:37:38 version -- app/version.sh@17 -- # get_header_version major 00:09:45.294 13:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # cut -f2 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.294 13:37:38 version -- app/version.sh@17 -- # major=24 00:09:45.294 13:37:38 version -- app/version.sh@18 -- # get_header_version minor 00:09:45.294 13:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # cut -f2 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.294 13:37:38 version -- app/version.sh@18 -- # minor=9 00:09:45.294 13:37:38 version -- app/version.sh@19 -- # get_header_version patch 00:09:45.294 13:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # cut -f2 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.294 13:37:38 version -- app/version.sh@19 -- # patch=0 00:09:45.294 13:37:38 version -- app/version.sh@20 -- # get_header_version suffix 00:09:45.294 13:37:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # cut -f2 00:09:45.294 13:37:38 version -- app/version.sh@14 -- # tr -d '"' 00:09:45.294 13:37:38 version -- app/version.sh@20 -- # suffix=-pre 00:09:45.294 13:37:38 version -- app/version.sh@22 -- # version=24.9 00:09:45.294 13:37:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:45.294 13:37:38 version -- app/version.sh@28 -- # version=24.9rc0 00:09:45.294 13:37:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.294 13:37:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:45.294 13:37:38 version -- app/version.sh@30 -- # py_version=24.9rc0 
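version.sh builds its version string from the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines in include/spdk/version.h -- the grep | cut -f2 | tr -d '"' pipeline traced once per field above -- and then requires python's spdk package to report the same string. Condensed below (header path verbatim; cut -f2 assumes the tab-separated defines this header uses):

# Condensed get_header_version logic from the trace (sketch):
hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
major=$(ver MAJOR); minor=$(ver MINOR); patch=$(ver PATCH); suffix=$(ver SUFFIX)
version=$major.$minor                           # 24.9
(( patch != 0 )) && version=$version.$patch     # skipped here: patch is 0
[[ $suffix == -pre ]] && version=${version}rc0  # -> 24.9rc0 == py_version above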
00:09:45.294 13:37:38 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:45.294 00:09:45.294 real 0m0.196s 00:09:45.294 user 0m0.101s 00:09:45.294 sys 0m0.146s 00:09:45.294 13:37:38 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:45.294 13:37:38 version -- common/autotest_common.sh@10 -- # set +x 00:09:45.294 ************************************ 00:09:45.294 END TEST version 00:09:45.294 ************************************ 00:09:45.555 13:37:38 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@198 -- # uname -s 00:09:45.555 13:37:38 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:45.555 13:37:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:45.555 13:37:38 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:45.555 13:37:38 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:45.555 13:37:38 -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:45.555 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:09:45.555 13:37:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:09:45.555 13:37:38 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:09:45.555 13:37:38 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:45.555 13:37:38 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:45.555 13:37:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:45.555 13:37:38 -- common/autotest_common.sh@10 -- # set +x 00:09:45.555 ************************************ 00:09:45.555 START TEST nvmf_tcp 00:09:45.555 ************************************ 00:09:45.555 13:37:38 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:45.555 * Looking for test storage... 00:09:45.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.555 13:37:38 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.555 13:37:38 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.555 13:37:38 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.555 13:37:38 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 13:37:38 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 13:37:38 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 13:37:38 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:09:45.555 13:37:38 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:45.555 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:45.555 13:37:38 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:45.555 13:37:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.816 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:45.816 13:37:38 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.816 13:37:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:45.816 13:37:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:45.816 13:37:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.816 ************************************ 00:09:45.816 START TEST nvmf_example 00:09:45.816 ************************************ 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.816 * Looking for test storage... 
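The host identity that common.sh cached above comes from nvme-cli: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is just the trailing UUID. A sketch of one way to derive the pair (the exact expansion inside common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:006f...263e
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # keep only the UUID after ':uuid:'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Later tests pass these straight to the initiator, e.g.:
    # nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1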
00:09:45.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:45.816 13:37:38 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.817 13:37:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.464 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.464 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:52.465 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:52.465 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:52.465 Found net devices under 
0000:af:00.0: cvl_0_0 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:52.465 Found net devices under 0000:af:00.1: cvl_0_1 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.465 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:09:52.725 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:09:52.725 00:09:52.725 --- 10.0.0.2 ping statistics --- 00:09:52.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.725 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:52.984 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:09:52.984 00:09:52.984 --- 10.0.0.1 ping statistics --- 00:09:52.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.984 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1255801 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1255801 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1255801 ']' 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
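To recap the topology nvmftestinit just built: the two E810 ports were found by scanning /sys/bus/pci/devices/<bdf>/net/ for each whitelisted device ID (0x159b here), then cvl_0_0 was moved into a fresh network namespace as the target side while cvl_0_1 stayed in the root namespace as the initiator. Collected from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The sub-millisecond round trips in both directions (0.166 ms and 0.104 ms) confirm the link before the example target is launched inside the namespace via ip netns exec.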
00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:52.985 13:37:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.985 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:53.921 13:37:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:53.921 EAL: No free 2048 kB hugepages reported on node 1 
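With the target listening on /var/tmp/spdk.sock, the provisioning above is five rpc_cmd calls; the same sequence can be replayed by hand with scripts/rpc.py. A condensed sketch from the SPDK repo root (the rpc shorthand is hypothetical; -o comes from how common.sh builds NVMF_TRANSPORT_OPTS for TCP):

    rpc() { scripts/rpc.py "$@"; }                # hypothetical shorthand for rpc_cmd
    rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, -u 8192 = 8 KiB I/O unit size
    rpc bdev_malloc_create 64 512                 # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Then drive it from the initiator for 10 s at queue depth 64, 4 KiB I/Os, 30% reads:
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The result table below is self-consistent: 15992.63 IOPS x 4096 B per I/O is 62.47 MiB/s, matching the MiB/s column, and by Little's law a 64-deep queue at that rate predicts 64 / 15992.63 s = 4002 us average latency, matching the 4001.43 us reported.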
00:10:06.159 Initializing NVMe Controllers 00:10:06.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:06.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:06.159 Initialization complete. Launching workers.
00:10:06.159 ========================================================
00:10:06.159                                                                           Latency(us)
00:10:06.159 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:06.159 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15992.63      62.47    4001.43     668.57   15327.67
00:10:06.159 ========================================================
00:10:06.159 Total                                                                  :   15992.63      62.47    4001.43     668.57   15327.67
00:10:06.159
00:10:06.159 13:37:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:06.159 13:37:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:06.159 13:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.159 13:37:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.159 rmmod nvme_tcp 00:10:06.159 rmmod nvme_fabrics 00:10:06.159 rmmod nvme_keyring 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1255801 ']' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1255801 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1255801 ']' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1255801 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1255801 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1255801' 00:10:06.159 killing process with pid 1255801 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 1255801 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 1255801 00:10:06.159 nvmf threads initialize successfully 00:10:06.159 bdev subsystem init successfully 00:10:06.159 created a nvmf target service 00:10:06.159 create targets's poll groups done 00:10:06.159 all subsystems of target started 00:10:06.159 nvmf target is running 00:10:06.159 all subsystems of target stopped 00:10:06.159 destroy targets's poll groups done 00:10:06.159 destroyed the nvmf target service 00:10:06.159 bdev subsystem finish successfully 00:10:06.159 nvmf threads destroy successfully 00:10:06.159 13:37:57
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.159 13:37:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.728 13:37:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.728 13:37:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:06.728 13:37:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:06.728 13:37:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.728 00:10:06.728 real 0m20.929s 00:10:06.728 user 0m45.999s 00:10:06.728 sys 0m7.499s 00:10:06.728 13:37:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:06.728 13:37:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.728 ************************************ 00:10:06.728 END TEST nvmf_example 00:10:06.728 ************************************ 00:10:06.728 13:37:59 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:06.728 13:37:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:06.728 13:37:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:06.728 13:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.728 ************************************ 00:10:06.728 START TEST nvmf_filesystem 00:10:06.728 ************************************ 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:06.728 * Looking for test storage... 
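Every START/END banner and real/user/sys summary in this log comes from the run_test wrapper in autotest_common.sh. Roughly, it times the suite body and propagates its exit status; a sketch of the shape only (the real helper also juggles xtrace and the banner decoration):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"    # run the suite, e.g. test/nvmf/target/filesystem.sh --transport=tcp
        local rc=$?
        echo "END TEST $name"
        return $rc
    }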
00:10:06.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:06.728 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:06.728 13:37:59 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:06.990 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:06.990 #define SPDK_CONFIG_H 00:10:06.990 #define SPDK_CONFIG_APPS 1 00:10:06.990 #define SPDK_CONFIG_ARCH native 00:10:06.990 #undef SPDK_CONFIG_ASAN 00:10:06.990 #undef SPDK_CONFIG_AVAHI 00:10:06.990 #undef SPDK_CONFIG_CET 00:10:06.990 #define SPDK_CONFIG_COVERAGE 1 00:10:06.990 #define SPDK_CONFIG_CROSS_PREFIX 00:10:06.990 #undef SPDK_CONFIG_CRYPTO 00:10:06.990 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:06.990 #undef SPDK_CONFIG_CUSTOMOCF 00:10:06.990 #undef SPDK_CONFIG_DAOS 00:10:06.990 #define SPDK_CONFIG_DAOS_DIR 00:10:06.990 #define SPDK_CONFIG_DEBUG 1 00:10:06.990 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:06.990 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:06.991 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:06.991 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:06.991 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:06.991 #undef SPDK_CONFIG_DPDK_UADK 00:10:06.991 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:06.991 #define SPDK_CONFIG_EXAMPLES 1 00:10:06.991 #undef SPDK_CONFIG_FC 00:10:06.991 #define SPDK_CONFIG_FC_PATH 00:10:06.991 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:06.991 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:06.991 #undef SPDK_CONFIG_FUSE 00:10:06.991 #undef SPDK_CONFIG_FUZZER 00:10:06.991 #define SPDK_CONFIG_FUZZER_LIB 00:10:06.991 #undef SPDK_CONFIG_GOLANG 00:10:06.991 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:06.991 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:06.991 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:06.991 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:06.991 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:06.991 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:06.991 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:06.991 #define SPDK_CONFIG_IDXD 1 00:10:06.991 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:06.991 #undef SPDK_CONFIG_IPSEC_MB 00:10:06.991 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:06.991 #define SPDK_CONFIG_ISAL 1 00:10:06.991 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:06.991 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:06.991 #define SPDK_CONFIG_LIBDIR 00:10:06.991 #undef SPDK_CONFIG_LTO 00:10:06.991 #define SPDK_CONFIG_MAX_LCORES 00:10:06.991 #define SPDK_CONFIG_NVME_CUSE 1 00:10:06.991 #undef SPDK_CONFIG_OCF 00:10:06.991 #define SPDK_CONFIG_OCF_PATH 00:10:06.991 #define 
SPDK_CONFIG_OPENSSL_PATH 00:10:06.991 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:06.991 #define SPDK_CONFIG_PGO_DIR 00:10:06.991 #undef SPDK_CONFIG_PGO_USE 00:10:06.991 #define SPDK_CONFIG_PREFIX /usr/local 00:10:06.991 #undef SPDK_CONFIG_RAID5F 00:10:06.991 #undef SPDK_CONFIG_RBD 00:10:06.991 #define SPDK_CONFIG_RDMA 1 00:10:06.991 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:06.991 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:06.991 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:06.991 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:06.991 #define SPDK_CONFIG_SHARED 1 00:10:06.991 #undef SPDK_CONFIG_SMA 00:10:06.991 #define SPDK_CONFIG_TESTS 1 00:10:06.991 #undef SPDK_CONFIG_TSAN 00:10:06.991 #define SPDK_CONFIG_UBLK 1 00:10:06.991 #define SPDK_CONFIG_UBSAN 1 00:10:06.991 #undef SPDK_CONFIG_UNIT_TESTS 00:10:06.991 #undef SPDK_CONFIG_URING 00:10:06.991 #define SPDK_CONFIG_URING_PATH 00:10:06.991 #undef SPDK_CONFIG_URING_ZNS 00:10:06.991 #undef SPDK_CONFIG_USDT 00:10:06.991 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:06.991 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:06.991 #undef SPDK_CONFIG_VFIO_USER 00:10:06.991 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:06.991 #define SPDK_CONFIG_VHOST 1 00:10:06.991 #define SPDK_CONFIG_VIRTIO 1 00:10:06.991 #undef SPDK_CONFIG_VTUNE 00:10:06.991 #define SPDK_CONFIG_VTUNE_DIR 00:10:06.991 #define SPDK_CONFIG_WERROR 1 00:10:06.991 #define SPDK_CONFIG_WPDK_DIR 00:10:06.991 #undef SPDK_CONFIG_XNVME 00:10:06.991 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:06.991 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:06.992 13:37:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:06.992 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
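Annotation: the long run of `: 0` / `: 1` followed by `export SPDK_TEST_*` pairs traced above is bash's assign-default-then-export idiom: each autotest flag keeps whatever value the job config (autorun-spdk.conf, sourced earlier in this build) already gave it, and otherwise falls back to a built-in default. A minimal sketch of the pattern, with a hypothetical flag name standing in for the real list:

    #!/usr/bin/env bash
    # ":" is the no-op builtin; the ${VAR:=default} expansion inside it
    # assigns the default only when the variable is unset or empty, which
    # is why xtrace prints the already-resolved value (": 1", ": 0", ": tcp").
    : "${SPDK_TEST_EXAMPLE:=0}"    # hypothetical flag; the real ones are SPDK_TEST_NVMF, SPDK_RUN_UBSAN, ...
    export SPDK_TEST_EXAMPLE

    # A value set by the job wins over the default:
    #   SPDK_TEST_EXAMPLE=1 bash this_script.sh    # exports 1, not 0

This matches the trace: the flags set in autorun-spdk.conf (RUN_NIGHTLY, SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, SPDK_RUN_UBSAN) echo back with their configured values, while every other flag resolves to its default.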
00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1258224 ]] 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1258224 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.WBdBpA 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.WBdBpA/tests/target /tmp/spdk.WBdBpA 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=957145088 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327284736 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55374258176 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742305280 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6368047104 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867775488 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12339077120 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9383936 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30870409216 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871154688 00:10:06.993 13:37:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=745472 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:06.993 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:06.994 * Looking for test storage... 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55374258176 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8582639616 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:10:06.994 13:37:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.994 
13:37:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.994 13:37:59 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.994 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.995 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.995 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.995 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.995 13:37:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.995 13:37:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:15.113 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:15.113 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.113 13:38:06 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:15.113 Found net devices under 0000:af:00.0: cvl_0_0 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:15.113 Found net devices under 0000:af:00.1: cvl_0_1 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:15.113 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:15.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:10:15.114 00:10:15.114 --- 10.0.0.2 ping statistics --- 00:10:15.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.114 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:15.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:10:15.114 00:10:15.114 --- 10.0.0.1 ping statistics --- 00:10:15.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.114 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:15.114 13:38:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.114 ************************************ 00:10:15.114 START TEST nvmf_filesystem_no_in_capsule 00:10:15.114 ************************************ 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1261607 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1261607 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1261607 ']' 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:15.114 13:38:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.114 [2024-06-11 13:38:07.112486] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:10:15.114 [2024-06-11 13:38:07.112549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.114 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.114 [2024-06-11 13:38:07.223514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.114 [2024-06-11 13:38:07.312196] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.114 [2024-06-11 13:38:07.312242] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.114 [2024-06-11 13:38:07.312256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.114 [2024-06-11 13:38:07.312268] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.114 [2024-06-11 13:38:07.312278] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
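Annotation: with nvmf_tgt now starting inside the cvl_0_0_ns_spdk namespace, the lines that follow configure it entirely over JSON-RPC; rpc_cmd here is the harness's wrapper around SPDK's scripts/rpc.py, talking to the Unix socket /var/tmp/spdk.sock seen earlier as DEFAULT_RPC_ADDR. Run by hand, the same bring-up would look roughly like the sketch below. The RPC socket is a filesystem path, so these calls work from any network namespace; only the final data-path command cares which namespace it runs in, and the harness additionally passes --hostnqn/--hostid to nvme connect, omitted here.

    # Target-side bring-up, mirroring the rpc_cmd calls traced below
    # (same flags as the trace: -u io-unit-size 8192, -c in-capsule-data-size 0):
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB ramdisk with 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side, from the host namespace, as the trace does further down:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420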
00:10:15.114 [2024-06-11 13:38:07.314498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.114 [2024-06-11 13:38:07.314518] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.114 [2024-06-11 13:38:07.314585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.114 [2024-06-11 13:38:07.314584] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.371 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.372 [2024-06-11 13:38:08.081978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.372 Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.372 [2024-06-11 13:38:08.235056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:15.372 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:10:15.372 { 00:10:15.372 "name": "Malloc1", 00:10:15.372 "aliases": [ 00:10:15.372 "538ae164-8054-4c44-89ec-ae246cf02667" 00:10:15.372 ], 00:10:15.372 "product_name": "Malloc disk", 00:10:15.372 "block_size": 512, 00:10:15.372 "num_blocks": 1048576, 00:10:15.372 "uuid": "538ae164-8054-4c44-89ec-ae246cf02667", 00:10:15.372 "assigned_rate_limits": { 00:10:15.372 "rw_ios_per_sec": 0, 00:10:15.372 "rw_mbytes_per_sec": 0, 00:10:15.372 "r_mbytes_per_sec": 0, 00:10:15.372 "w_mbytes_per_sec": 0 00:10:15.372 }, 00:10:15.372 "claimed": true, 00:10:15.372 "claim_type": "exclusive_write", 00:10:15.372 "zoned": false, 00:10:15.372 "supported_io_types": { 00:10:15.372 "read": true, 00:10:15.372 "write": true, 00:10:15.372 "unmap": true, 00:10:15.372 "write_zeroes": true, 00:10:15.372 "flush": true, 00:10:15.372 "reset": true, 00:10:15.372 "compare": false, 00:10:15.372 "compare_and_write": false, 00:10:15.372 "abort": true, 00:10:15.372 "nvme_admin": false, 00:10:15.372 "nvme_io": false 00:10:15.372 }, 00:10:15.372 "memory_domains": [ 00:10:15.372 { 00:10:15.372 "dma_device_id": "system", 00:10:15.372 "dma_device_type": 1 00:10:15.372 }, 00:10:15.372 { 00:10:15.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.372 "dma_device_type": 2 00:10:15.372 } 00:10:15.372 ], 00:10:15.372 "driver_specific": {} 00:10:15.372 } 00:10:15.372 ]' 00:10:15.372 
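Replayed outside the harness, the provisioning above is five rpc.py calls, using exactly the RPC names and arguments from the trace; -c 0 disables in-capsule data for this first pass. The jq steps that follow pull block_size (512) and num_blocks (1048576) out of the bdev_get_bdevs JSON shown above, which yields the 536870912-byte malloc_size the test later compares against the host-visible device size.

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # TCP transport, no in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420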
13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:10:15.629 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:10:15.630 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:10:15.630 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:10:15.630 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:10:15.630 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:10:15.630 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:15.630 13:38:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.001 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.001 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:10:17.001 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.001 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:17.001 13:38:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:18.937 13:38:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:18.937 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.195 13:38:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:19.451 13:38:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.821 ************************************ 00:10:20.821 START TEST filesystem_ext4 00:10:20.821 ************************************ 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:10:20.821 13:38:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:20.821 mke2fs 1.46.5 (30-Dec-2021) 00:10:20.821 Discarding device blocks: 0/522240 done 00:10:20.821 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:20.821 
Filesystem UUID: c4099e06-342f-42f3-87ee-3f740816e6a7 00:10:20.821 Superblock backups stored on blocks: 00:10:20.821 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:20.821 00:10:20.821 Allocating group tables: 0/64 done 00:10:20.821 Writing inode tables: 0/64 done 00:10:20.821 Creating journal (8192 blocks): done 00:10:21.932 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:21.932 00:10:21.932 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:10:21.932 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1261607 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:22.196 00:10:22.196 real 0m1.566s 00:10:22.196 user 0m0.033s 00:10:22.196 sys 0m0.074s 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:22.196 ************************************ 00:10:22.196 END TEST filesystem_ext4 00:10:22.196 ************************************ 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:22.196 13:38:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.196 ************************************ 00:10:22.196 START TEST filesystem_btrfs 00:10:22.196 ************************************ 00:10:22.196 13:38:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:10:22.196 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:22.454 btrfs-progs v6.6.2 00:10:22.454 See https://btrfs.readthedocs.io for more information. 00:10:22.454 00:10:22.454 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:22.454 NOTE: several default settings have changed in version 5.15, please make sure 00:10:22.454 this does not affect your deployments: 00:10:22.454 - DUP for metadata (-m dup) 00:10:22.454 - enabled no-holes (-O no-holes) 00:10:22.454 - enabled free-space-tree (-R free-space-tree) 00:10:22.454 00:10:22.454 Label: (null) 00:10:22.454 UUID: e879795b-aa5c-452d-a179-18cdfd092516 00:10:22.454 Node size: 16384 00:10:22.454 Sector size: 4096 00:10:22.454 Filesystem size: 510.00MiB 00:10:22.454 Block group profiles: 00:10:22.454 Data: single 8.00MiB 00:10:22.454 Metadata: DUP 32.00MiB 00:10:22.454 System: DUP 8.00MiB 00:10:22.454 SSD detected: yes 00:10:22.454 Zoned device: no 00:10:22.454 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:22.454 Runtime features: free-space-tree 00:10:22.454 Checksum: crc32c 00:10:22.454 Number of devices: 1 00:10:22.454 Devices: 00:10:22.454 ID SIZE PATH 00:10:22.454 1 510.00MiB /dev/nvme0n1p1 00:10:22.454 00:10:22.454 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:10:22.454 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1261607 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:22.712 00:10:22.712 real 0m0.520s 00:10:22.712 user 0m0.033s 00:10:22.712 sys 0m0.139s 00:10:22.712 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:22.713 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:22.713 ************************************ 00:10:22.713 END TEST filesystem_btrfs 00:10:22.713 ************************************ 00:10:22.713 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:22.713 13:38:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:22.713 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:22.713 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.971 ************************************ 00:10:22.971 START TEST filesystem_xfs 00:10:22.971 ************************************ 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:10:22.971 13:38:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:22.971 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:22.971 = sectsz=512 attr=2, projid32bit=1 00:10:22.971 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:22.971 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:22.971 data = bsize=4096 blocks=130560, imaxpct=25 00:10:22.971 = sunit=0 swidth=0 blks 00:10:22.971 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:22.971 log =internal log bsize=4096 blocks=16384, version=2 00:10:22.971 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:22.971 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:23.905 Discarding blocks...Done. 
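Each filesystem_* subtest wraps the same smoke check around its mkfs; condensed here from the target/filesystem.sh steps visible in the traces (the GPT partition is created once, before the first subtest):

    # Once, after nvme connect:
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

    # Per filesystem, after mkfs.<fstype> on /dev/nvme0n1p1:
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa      # create a file across NVMe/TCP
    sync                       # flush it to the remote malloc bdev
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"         # the target must have survived the I/O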
00:10:23.905 13:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:10:23.905 13:38:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1261607 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.804 00:10:25.804 real 0m3.009s 00:10:25.804 user 0m0.031s 00:10:25.804 sys 0m0.082s 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:25.804 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:25.804 ************************************ 00:10:25.805 END TEST filesystem_xfs 00:10:25.805 ************************************ 00:10:25.805 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:26.062 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:26.062 13:38:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:26.321 
13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:10:26.321 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1261607 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1261607 ']' 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1261607 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1261607 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1261607' 00:10:26.322 killing process with pid 1261607 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 1261607 00:10:26.322 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 1261607 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:26.889 00:10:26.889 real 0m12.489s 00:10:26.889 user 0m48.563s 00:10:26.889 sys 0m1.793s 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.889 ************************************ 00:10:26.889 END TEST nvmf_filesystem_no_in_capsule 00:10:26.889 ************************************ 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:26.889 
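Between the two passes the target is torn down in the reverse order of setup, as the trace above shows; a minimal sketch, with the pid variable assumed:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                     # drop the host-side controller first
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # then remove the subsystem
    kill "$nvmfpid"                                                   # finally stop nvmf_tgt (pid 1261607 here)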
************************************ 00:10:26.889 START TEST nvmf_filesystem_in_capsule 00:10:26.889 ************************************ 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1263956 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1263956 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1263956 ']' 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:26.889 13:38:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.889 [2024-06-11 13:38:19.689313] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:10:26.889 [2024-06-11 13:38:19.689375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.889 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.148 [2024-06-11 13:38:19.800373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.148 [2024-06-11 13:38:19.879377] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.148 [2024-06-11 13:38:19.879426] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.148 [2024-06-11 13:38:19.879439] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.148 [2024-06-11 13:38:19.879451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.148 [2024-06-11 13:38:19.879462] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
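The in-capsule pass starting here differs from the first only in the transport's in-capsule data size: 4096 bytes instead of 0, so host writes of up to 4 KiB ride inside the NVMe/TCP command capsule rather than requiring a separate data transfer. Concretely, the transport RPC in the trace below becomes:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # permit 4 KiB of in-capsule data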
00:10:27.148 [2024-06-11 13:38:19.879526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.148 [2024-06-11 13:38:19.879625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.148 [2024-06-11 13:38:19.879675] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.148 [2024-06-11 13:38:19.879675] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.715 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:27.715 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:10:27.715 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.715 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:27.715 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 [2024-06-11 13:38:20.654749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 Malloc1 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 13:38:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.974 [2024-06-11 13:38:20.807587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:10:27.974 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:10:27.975 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:27.975 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:27.975 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.975 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:27.975 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:10:27.975 { 00:10:27.975 "name": "Malloc1", 00:10:27.975 "aliases": [ 00:10:27.975 "86d94da1-0b46-4c32-80cf-4193ca10aa8d" 00:10:27.975 ], 00:10:27.975 "product_name": "Malloc disk", 00:10:27.975 "block_size": 512, 00:10:27.975 "num_blocks": 1048576, 00:10:27.975 "uuid": "86d94da1-0b46-4c32-80cf-4193ca10aa8d", 00:10:27.975 "assigned_rate_limits": { 00:10:27.975 "rw_ios_per_sec": 0, 00:10:27.975 "rw_mbytes_per_sec": 0, 00:10:27.975 "r_mbytes_per_sec": 0, 00:10:27.975 "w_mbytes_per_sec": 0 00:10:27.975 }, 00:10:27.975 "claimed": true, 00:10:27.975 "claim_type": "exclusive_write", 00:10:27.975 "zoned": false, 00:10:27.975 "supported_io_types": { 00:10:27.975 "read": true, 00:10:27.975 "write": true, 00:10:27.975 "unmap": true, 00:10:27.975 "write_zeroes": true, 00:10:27.975 "flush": true, 00:10:27.975 "reset": true, 00:10:27.975 "compare": false, 00:10:27.975 "compare_and_write": false, 00:10:27.975 "abort": true, 00:10:27.975 "nvme_admin": false, 00:10:27.975 "nvme_io": false 00:10:27.975 }, 00:10:27.975 "memory_domains": [ 00:10:27.975 { 00:10:27.975 "dma_device_id": "system", 00:10:27.975 "dma_device_type": 1 00:10:27.975 }, 00:10:27.975 { 00:10:27.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.975 "dma_device_type": 2 00:10:27.975 } 00:10:27.975 ], 00:10:27.975 "driver_specific": {} 00:10:27.975 } 00:10:27.975 ]' 00:10:27.975 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:28.233 13:38:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:29.607 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.607 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:10:29.607 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.607 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:10:29.607 13:38:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:31.503 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:31.762 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:32.019 13:38:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.392 ************************************ 00:10:33.392 START TEST filesystem_in_capsule_ext4 00:10:33.392 ************************************ 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:10:33.392 13:38:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:33.392 mke2fs 1.46.5 (30-Dec-2021) 00:10:33.392 Discarding device blocks: 0/522240 done 00:10:33.392 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:33.392 Filesystem UUID: f45f41c8-67e7-4a65-867d-18e63d511dcc 00:10:33.392 Superblock backups stored on blocks: 00:10:33.392 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:33.392 00:10:33.392 Allocating group tables: 0/64 done 00:10:33.392 Writing inode tables: 0/64 done 00:10:35.286 Creating journal (8192 blocks): done 00:10:36.221 Writing superblocks and filesystem accounting information: 0/64 done 00:10:36.221 00:10:36.221 13:38:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:10:36.221 13:38:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1263956 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.221 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.480 00:10:36.480 real 0m3.191s 00:10:36.480 user 0m0.025s 00:10:36.480 sys 0m0.083s 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:36.480 ************************************ 00:10:36.480 END TEST filesystem_in_capsule_ext4 00:10:36.480 ************************************ 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.480 ************************************ 00:10:36.480 START TEST filesystem_in_capsule_btrfs 00:10:36.480 ************************************ 00:10:36.480 13:38:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:10:36.480 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:36.739 btrfs-progs v6.6.2 00:10:36.739 See https://btrfs.readthedocs.io for more information. 00:10:36.739 00:10:36.739 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:36.739 NOTE: several default settings have changed in version 5.15, please make sure 00:10:36.739 this does not affect your deployments: 00:10:36.739 - DUP for metadata (-m dup) 00:10:36.739 - enabled no-holes (-O no-holes) 00:10:36.739 - enabled free-space-tree (-R free-space-tree) 00:10:36.739 00:10:36.739 Label: (null) 00:10:36.739 UUID: 2286920d-b371-4730-a0a6-4a694009134b 00:10:36.739 Node size: 16384 00:10:36.739 Sector size: 4096 00:10:36.739 Filesystem size: 510.00MiB 00:10:36.739 Block group profiles: 00:10:36.739 Data: single 8.00MiB 00:10:36.739 Metadata: DUP 32.00MiB 00:10:36.739 System: DUP 8.00MiB 00:10:36.739 SSD detected: yes 00:10:36.739 Zoned device: no 00:10:36.739 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:36.739 Runtime features: free-space-tree 00:10:36.739 Checksum: crc32c 00:10:36.739 Number of devices: 1 00:10:36.739 Devices: 00:10:36.739 ID SIZE PATH 00:10:36.739 1 510.00MiB /dev/nvme0n1p1 00:10:36.739 00:10:36.739 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:10:36.739 13:38:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1263956 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.675 00:10:37.675 real 0m1.226s 00:10:37.675 user 0m0.031s 00:10:37.675 sys 0m0.141s 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.675 ************************************ 00:10:37.675 END TEST filesystem_in_capsule_btrfs 00:10:37.675 ************************************ 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.675 ************************************ 00:10:37.675 START TEST filesystem_in_capsule_xfs 00:10:37.675 ************************************ 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:10:37.675 13:38:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:37.933 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:37.933 = sectsz=512 attr=2, projid32bit=1 00:10:37.933 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:37.934 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:37.934 data = bsize=4096 blocks=130560, imaxpct=25 00:10:37.934 = sunit=0 swidth=0 blks 00:10:37.934 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:37.934 log =internal log bsize=4096 blocks=16384, version=2 00:10:37.934 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:37.934 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:38.869 Discarding blocks...Done. 
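As all six mkfs traces show, the make_filesystem helper just picks the right force flag per filesystem before invoking mkfs; a condensed sketch of that logic, with the helper's retry bookkeeping omitted:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # mkfs.ext4 spells "force" as -F; btrfs and xfs use -f.
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" $force "$dev_name"   # e.g. mkfs.xfs -f /dev/nvme0n1p1
    }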
00:10:38.869 13:38:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:10:38.869 13:38:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.402 13:38:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1263956 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.402 00:10:41.402 real 0m3.538s 00:10:41.402 user 0m0.034s 00:10:41.402 sys 0m0.076s 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:41.402 ************************************ 00:10:41.402 END TEST filesystem_in_capsule_xfs 00:10:41.402 ************************************ 00:10:41.402 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:41.662 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:41.662 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.662 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.662 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:10:41.662 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:10:41.662 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.922 13:38:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1263956 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1263956 ']' 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1263956 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1263956 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1263956' 00:10:41.922 killing process with pid 1263956 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 1263956 00:10:41.922 13:38:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 1263956 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:42.182 00:10:42.182 real 0m15.403s 00:10:42.182 user 0m59.921s 00:10:42.182 sys 0m2.096s 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.182 ************************************ 00:10:42.182 END TEST nvmf_filesystem_in_capsule 00:10:42.182 ************************************ 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.182 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.182 rmmod nvme_tcp 00:10:42.440 rmmod nvme_fabrics 00:10:42.440 rmmod nvme_keyring 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.440 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.441 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.441 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.441 13:38:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.441 13:38:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.441 13:38:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.363 13:38:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.363 00:10:44.363 real 0m37.712s 00:10:44.363 user 1m50.597s 00:10:44.363 sys 0m9.616s 00:10:44.364 13:38:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:44.364 13:38:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:44.364 ************************************ 00:10:44.364 END TEST nvmf_filesystem 00:10:44.364 ************************************ 00:10:44.681 13:38:37 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:44.681 13:38:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:44.681 13:38:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:44.681 13:38:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.681 ************************************ 00:10:44.681 START TEST nvmf_target_discovery 00:10:44.681 ************************************ 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:44.681 * Looking for test storage... 
00:10:44.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.681 13:38:37 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.682 13:38:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.253 13:38:44 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:51.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:51.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:51.253 Found net devices under 0000:af:00.0: cvl_0_0 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:51.253 Found net devices under 0000:af:00.1: cvl_0_1 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.253 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:10:51.513 00:10:51.513 --- 10.0.0.2 ping statistics --- 00:10:51.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.513 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:51.513 00:10:51.513 --- 10.0.0.1 ping statistics --- 00:10:51.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.513 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1270530 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1270530 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 1270530 ']' 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:10:51.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:51.513 13:38:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.771 [2024-06-11 13:38:44.444863] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:10:51.771 [2024-06-11 13:38:44.444921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.771 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.771 [2024-06-11 13:38:44.552743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.771 [2024-06-11 13:38:44.644930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.771 [2024-06-11 13:38:44.644972] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.771 [2024-06-11 13:38:44.644985] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.771 [2024-06-11 13:38:44.644998] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.771 [2024-06-11 13:38:44.645008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.771 [2024-06-11 13:38:44.645067] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.771 [2024-06-11 13:38:44.645167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.771 [2024-06-11 13:38:44.645278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.771 [2024-06-11 13:38:44.645278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.707 [2024-06-11 13:38:45.413746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:52.707 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:52.708 13:38:45 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 Null1 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 [2024-06-11 13:38:45.466078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 Null2 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:52.708 13:38:45 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 Null3 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 Null4 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.708 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:10:52.967 00:10:52.967 Discovery Log Number of Records 6, Generation counter 6 00:10:52.967 =====Discovery Log Entry 0====== 00:10:52.967 trtype: tcp 00:10:52.967 adrfam: ipv4 00:10:52.967 subtype: current discovery subsystem 00:10:52.967 treq: not required 00:10:52.967 portid: 0 00:10:52.967 trsvcid: 4420 00:10:52.967 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:52.967 traddr: 10.0.0.2 00:10:52.967 eflags: explicit discovery connections, duplicate discovery information 00:10:52.967 sectype: none 00:10:52.967 =====Discovery Log Entry 1====== 00:10:52.967 trtype: tcp 00:10:52.967 adrfam: ipv4 00:10:52.967 subtype: nvme subsystem 00:10:52.967 treq: not required 00:10:52.967 portid: 0 00:10:52.967 trsvcid: 4420 00:10:52.967 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:52.967 traddr: 10.0.0.2 00:10:52.967 eflags: none 00:10:52.967 sectype: none 00:10:52.967 =====Discovery Log Entry 2====== 00:10:52.967 trtype: tcp 00:10:52.967 adrfam: ipv4 00:10:52.967 subtype: nvme subsystem 00:10:52.967 treq: not required 00:10:52.967 portid: 0 00:10:52.967 trsvcid: 4420 00:10:52.967 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:52.967 traddr: 10.0.0.2 00:10:52.967 eflags: none 00:10:52.967 sectype: none 00:10:52.967 =====Discovery Log Entry 3====== 00:10:52.967 trtype: tcp 00:10:52.967 adrfam: ipv4 00:10:52.967 subtype: nvme subsystem 00:10:52.967 treq: not required 00:10:52.967 portid: 0 00:10:52.967 trsvcid: 4420 00:10:52.967 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:52.967 traddr: 10.0.0.2 00:10:52.967 eflags: none 00:10:52.967 sectype: none 00:10:52.967 =====Discovery Log Entry 4====== 00:10:52.967 trtype: tcp 00:10:52.967 adrfam: ipv4 00:10:52.967 subtype: nvme subsystem 00:10:52.967 treq: not required 
00:10:52.967 portid: 0 00:10:52.967 trsvcid: 4420 00:10:52.967 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:52.967 traddr: 10.0.0.2 00:10:52.967 eflags: none 00:10:52.967 sectype: none 00:10:52.967 =====Discovery Log Entry 5====== 00:10:52.967 trtype: tcp 00:10:52.967 adrfam: ipv4 00:10:52.967 subtype: discovery subsystem referral 00:10:52.967 treq: not required 00:10:52.967 portid: 0 00:10:52.967 trsvcid: 4430 00:10:52.967 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:52.967 traddr: 10.0.0.2 00:10:52.967 eflags: none 00:10:52.967 sectype: none 00:10:52.967 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:52.967 Perform nvmf subsystem discovery via RPC 00:10:52.967 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:52.967 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.967 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.967 [ 00:10:52.967 { 00:10:52.967 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:52.967 "subtype": "Discovery", 00:10:52.967 "listen_addresses": [ 00:10:52.967 { 00:10:52.967 "trtype": "TCP", 00:10:52.967 "adrfam": "IPv4", 00:10:52.967 "traddr": "10.0.0.2", 00:10:52.967 "trsvcid": "4420" 00:10:52.967 } 00:10:52.967 ], 00:10:52.967 "allow_any_host": true, 00:10:52.967 "hosts": [] 00:10:52.967 }, 00:10:52.967 { 00:10:52.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.967 "subtype": "NVMe", 00:10:52.967 "listen_addresses": [ 00:10:52.967 { 00:10:52.967 "trtype": "TCP", 00:10:52.967 "adrfam": "IPv4", 00:10:52.967 "traddr": "10.0.0.2", 00:10:52.967 "trsvcid": "4420" 00:10:52.967 } 00:10:52.967 ], 00:10:52.967 "allow_any_host": true, 00:10:52.967 "hosts": [], 00:10:52.967 "serial_number": "SPDK00000000000001", 00:10:52.967 "model_number": "SPDK bdev Controller", 00:10:52.967 "max_namespaces": 32, 00:10:52.967 "min_cntlid": 1, 00:10:52.967 "max_cntlid": 65519, 00:10:52.967 "namespaces": [ 00:10:52.967 { 00:10:52.967 "nsid": 1, 00:10:52.967 "bdev_name": "Null1", 00:10:52.967 "name": "Null1", 00:10:52.967 "nguid": "043E106D14DC4CCBBB2144ADEB2ACD17", 00:10:52.967 "uuid": "043e106d-14dc-4ccb-bb21-44adeb2acd17" 00:10:52.967 } 00:10:52.967 ] 00:10:52.967 }, 00:10:52.967 { 00:10:52.967 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:52.967 "subtype": "NVMe", 00:10:52.967 "listen_addresses": [ 00:10:52.967 { 00:10:52.967 "trtype": "TCP", 00:10:52.967 "adrfam": "IPv4", 00:10:52.967 "traddr": "10.0.0.2", 00:10:52.967 "trsvcid": "4420" 00:10:52.967 } 00:10:52.967 ], 00:10:52.967 "allow_any_host": true, 00:10:52.967 "hosts": [], 00:10:52.967 "serial_number": "SPDK00000000000002", 00:10:52.967 "model_number": "SPDK bdev Controller", 00:10:52.967 "max_namespaces": 32, 00:10:52.967 "min_cntlid": 1, 00:10:52.967 "max_cntlid": 65519, 00:10:52.967 "namespaces": [ 00:10:52.968 { 00:10:52.968 "nsid": 1, 00:10:52.968 "bdev_name": "Null2", 00:10:52.968 "name": "Null2", 00:10:52.968 "nguid": "CBEC052FAE814FDEBC7BA5518A7F0181", 00:10:52.968 "uuid": "cbec052f-ae81-4fde-bc7b-a5518a7f0181" 00:10:52.968 } 00:10:52.968 ] 00:10:52.968 }, 00:10:52.968 { 00:10:52.968 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:52.968 "subtype": "NVMe", 00:10:52.968 "listen_addresses": [ 00:10:52.968 { 00:10:52.968 "trtype": "TCP", 00:10:52.968 "adrfam": "IPv4", 00:10:52.968 "traddr": "10.0.0.2", 00:10:52.968 "trsvcid": "4420" 00:10:52.968 } 00:10:52.968 ], 00:10:52.968 "allow_any_host": true, 
00:10:52.968 "hosts": [], 00:10:52.968 "serial_number": "SPDK00000000000003", 00:10:52.968 "model_number": "SPDK bdev Controller", 00:10:52.968 "max_namespaces": 32, 00:10:52.968 "min_cntlid": 1, 00:10:52.968 "max_cntlid": 65519, 00:10:52.968 "namespaces": [ 00:10:52.968 { 00:10:52.968 "nsid": 1, 00:10:52.968 "bdev_name": "Null3", 00:10:52.968 "name": "Null3", 00:10:52.968 "nguid": "E116C3C13C894163BCED3890295F5A13", 00:10:52.968 "uuid": "e116c3c1-3c89-4163-bced-3890295f5a13" 00:10:52.968 } 00:10:52.968 ] 00:10:52.968 }, 00:10:52.968 { 00:10:52.968 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:52.968 "subtype": "NVMe", 00:10:52.968 "listen_addresses": [ 00:10:52.968 { 00:10:52.968 "trtype": "TCP", 00:10:52.968 "adrfam": "IPv4", 00:10:52.968 "traddr": "10.0.0.2", 00:10:52.968 "trsvcid": "4420" 00:10:52.968 } 00:10:52.968 ], 00:10:52.968 "allow_any_host": true, 00:10:52.968 "hosts": [], 00:10:52.968 "serial_number": "SPDK00000000000004", 00:10:52.968 "model_number": "SPDK bdev Controller", 00:10:52.968 "max_namespaces": 32, 00:10:52.968 "min_cntlid": 1, 00:10:52.968 "max_cntlid": 65519, 00:10:52.968 "namespaces": [ 00:10:52.968 { 00:10:52.968 "nsid": 1, 00:10:52.968 "bdev_name": "Null4", 00:10:52.968 "name": "Null4", 00:10:52.968 "nguid": "A75778E8B9934DB1981D7DB02D28A269", 00:10:52.968 "uuid": "a75778e8-b993-4db1-981d-7db02d28a269" 00:10:52.968 } 00:10:52.968 ] 00:10:52.968 } 00:10:52.968 ] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.968 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.227 13:38:45 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.228 13:38:45 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.228 rmmod nvme_tcp 00:10:53.228 rmmod nvme_fabrics 00:10:53.228 rmmod nvme_keyring 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1270530 ']' 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1270530 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 1270530 ']' 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 1270530 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1270530 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1270530' 00:10:53.228 killing process with pid 1270530 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 1270530 00:10:53.228 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 1270530 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.486 13:38:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.023 13:38:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.023 00:10:56.024 real 0m11.040s 00:10:56.024 user 0m8.573s 00:10:56.024 sys 0m5.812s 00:10:56.024 13:38:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:56.024 13:38:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.024 ************************************ 00:10:56.024 END TEST nvmf_target_discovery 00:10:56.024 ************************************ 00:10:56.024 13:38:48 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:56.024 13:38:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:56.024 13:38:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:56.024 13:38:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:56.024 ************************************ 00:10:56.024 START TEST nvmf_referrals 00:10:56.024 ************************************ 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:56.024 * Looking for test storage... 00:10:56.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
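The three NVMF_REFERRAL_IP_* addresses and the 4430 referral port defined above are the fixtures for the whole referrals run: each address is registered against the live discovery service over the RPC socket and then read back. A minimal standalone sketch of that registration step (the rpc.py invocation stands in for the suite's rpc_cmd wrapper; the loop is illustrative, not the literal referrals.sh body):

    # Register each loopback referral with the target's discovery service
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # referrals.sh@48 then asserts exactly three entries come back
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length    # -> 3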
00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:56.024 13:38:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.596 13:38:54 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:02.596 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:02.596 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:02.596 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.597 13:38:54 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:02.597 Found net devices under 0000:af:00.0: cvl_0_0 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:02.597 Found net devices under 0000:af:00.1: cvl_0_1 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.597 13:38:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.597 13:38:55 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:11:02.597 00:11:02.597 --- 10.0.0.2 ping statistics --- 00:11:02.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.597 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:11:02.597 00:11:02.597 --- 10.0.0.1 ping statistics --- 00:11:02.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.597 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1274459 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1274459 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 1274459 ']' 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
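The namespace plumbing traced above is what lets one host act as both target and initiator: one E810 port moves into a private namespace for the SPDK target while its sibling stays in the root namespace for the kernel initiator. Condensed into the underlying commands, with interface names and addresses exactly as they appear in this log (a sketch of the sequence, not a verbatim common.sh excerpt):

    ip netns add cvl_0_0_ns_spdk                 # private stack for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port in
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP data port
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator

With both pings answering (0% loss above), nvmf_tgt is launched inside the namespace via ip netns exec, which is why the discovery listener below comes up on 10.0.0.2 port 8009.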
00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.597 13:38:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.597 [2024-06-11 13:38:55.350639] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:11:02.597 [2024-06-11 13:38:55.350700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.597 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.597 [2024-06-11 13:38:55.457447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.857 [2024-06-11 13:38:55.545502] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.857 [2024-06-11 13:38:55.545544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.857 [2024-06-11 13:38:55.545557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.857 [2024-06-11 13:38:55.545570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.857 [2024-06-11 13:38:55.545580] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.857 [2024-06-11 13:38:55.545632] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.857 [2024-06-11 13:38:55.545650] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.857 [2024-06-11 13:38:55.545786] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.857 [2024-06-11 13:38:55.545786] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.426 [2024-06-11 13:38:56.307995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.426 [2024-06-11 13:38:56.324200] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.426 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:03.686 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:03.944 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:04.203 13:38:56 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.203 13:38:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.203 13:38:57 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:04.462 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:04.462 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:04.462 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:04.462 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:04.462 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.462 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.720 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:04.978 13:38:57 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.978 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:05.236 13:38:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:11:05.236 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.495 rmmod nvme_tcp 00:11:05.495 rmmod nvme_fabrics 00:11:05.495 rmmod nvme_keyring 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1274459 ']' 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1274459 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 1274459 ']' 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 1274459 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1274459 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1274459' 00:11:05.495 killing process with pid 1274459 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 1274459 00:11:05.495 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 1274459 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.754 13:38:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.287 13:39:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:08.287 
00:11:08.287 real 0m12.153s 00:11:08.287 user 0m14.492s 00:11:08.287 sys 0m6.011s 00:11:08.287 13:39:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:08.287 13:39:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 ************************************ 00:11:08.287 END TEST nvmf_referrals 00:11:08.287 ************************************ 00:11:08.287 13:39:00 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:08.287 13:39:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:08.287 13:39:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:08.287 13:39:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 ************************************ 00:11:08.287 START TEST nvmf_connect_disconnect 00:11:08.287 ************************************ 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:08.287 * Looking for test storage... 00:11:08.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.287 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:08.288 13:39:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:14.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:14.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:14.853 Found net devices under 0000:af:00.0: cvl_0_0 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:14.853 Found net devices under 0000:af:00.1: cvl_0_1 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
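At this point the run has classified both E810 ports (0000:af:00.0 and 0000:af:00.1, device ID 0x159b, driver ice) and nominated cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side. The PCI-to-netdev mapping behind the "Found net devices under ..." lines comes straight from sysfs; a minimal standalone sketch of the same lookup (illustrative only, not the harness's own code) is:

    pci=0000:af:00.0
    # every entry under .../net/ is a kernel net device backed by this PCI function
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "${dev##*/}"    # prints e.g. cvl_0_0
    done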
00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.853 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.854 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.854 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.854 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.854 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.854 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.854 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.143 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.143 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.143 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:15.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:11:15.143 00:11:15.143 --- 10.0.0.2 ping statistics --- 00:11:15.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.143 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:11:15.143 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:11:15.143 00:11:15.143 --- 10.0.0.1 ping statistics --- 00:11:15.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.143 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1279393 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1279393 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 1279393 ']' 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:15.144 13:39:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.144 [2024-06-11 13:39:07.938836] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
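The two ping checks just above validate the test topology before any NVMe traffic flows: the target port cvl_0_0 has been moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, and an iptables rule admits TCP traffic to the NVMe-oF port 4420. A condensed sketch of that setup, using the interface names from this run, is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Because nvmf_tgt is then launched with ip netns exec inside that namespace, every NVMe/TCP connection in the test crosses the physical link between the two ports instead of looping back through the host stack.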
00:11:15.144 [2024-06-11 13:39:07.938895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.144 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.144 [2024-06-11 13:39:08.044363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.401 [2024-06-11 13:39:08.132810] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.401 [2024-06-11 13:39:08.132852] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.401 [2024-06-11 13:39:08.132866] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.401 [2024-06-11 13:39:08.132878] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.402 [2024-06-11 13:39:08.132888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.402 [2024-06-11 13:39:08.132940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.402 [2024-06-11 13:39:08.133032] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.402 [2024-06-11 13:39:08.133142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.402 [2024-06-11 13:39:08.133143] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.967 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:15.967 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:11:15.967 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.967 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:15.967 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.226 [2024-06-11 13:39:08.911807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.226 13:39:08 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.226 [2024-06-11 13:39:08.967798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:16.226 13:39:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:18.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.059 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [the same disconnect message repeats here once per iteration of the 100-cycle connect/disconnect loop; the intermediate repetitions and their timestamps are elided] 00:14:03.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:14:05.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:08.673 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.674 rmmod nvme_tcp 00:15:08.674 rmmod nvme_fabrics 00:15:08.674 rmmod nvme_keyring 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1279393 ']' 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1279393 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 
1279393 ']' 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 1279393 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:08.674 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1279393 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1279393' 00:15:08.933 killing process with pid 1279393 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 1279393 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 1279393 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.933 13:43:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.469 13:43:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:11.469 00:15:11.469 real 4m3.199s 00:15:11.469 user 15m8.006s 00:15:11.469 sys 0m43.125s 00:15:11.469 13:43:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:11.469 13:43:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:11.469 ************************************ 00:15:11.469 END TEST nvmf_connect_disconnect 00:15:11.469 ************************************ 00:15:11.469 13:43:03 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:11.469 13:43:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:11.469 13:43:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:11.469 13:43:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.469 ************************************ 00:15:11.469 START TEST nvmf_multitarget 00:15:11.469 ************************************ 00:15:11.469 13:43:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:11.469 * Looking for test storage... 
00:15:11.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.469 13:43:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.470 13:43:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:18.046 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:18.046 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:18.046 Found net devices under 0000:af:00.0: cvl_0_0 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:18.046 Found net devices under 0000:af:00.1: cvl_0_1 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.046 13:43:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:18.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:15:18.306 00:15:18.306 --- 10.0.0.2 ping statistics --- 00:15:18.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.306 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:15:18.306 00:15:18.306 --- 10.0.0.1 ping statistics --- 00:15:18.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.306 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.306 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1324286 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1324286 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 1324286 ']' 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:18.566 13:43:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:18.566 [2024-06-11 13:43:11.294932] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
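As in the connect_disconnect run, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the new process (pid 1324286 here) answers RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A rough stand-in for that readiness check, assuming SPDK's stock rpc.py and its rpc_get_methods call rather than the harness's exact implementation, is:

    pid=1324286; rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || exit 1                          # target process must still be alive
        if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break                                         # RPC socket is up; target is ready
        fi
        sleep 0.1
    done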
00:15:18.566 [2024-06-11 13:43:11.294990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.566 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.566 [2024-06-11 13:43:11.406714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.826 [2024-06-11 13:43:11.490557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.826 [2024-06-11 13:43:11.490601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.826 [2024-06-11 13:43:11.490614] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.826 [2024-06-11 13:43:11.490627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.826 [2024-06-11 13:43:11.490637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.826 [2024-06-11 13:43:11.490700] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.826 [2024-06-11 13:43:11.490794] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.826 [2024-06-11 13:43:11.490912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.826 [2024-06-11 13:43:11.490912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:19.402 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:19.664 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:19.664 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:19.664 "nvmf_tgt_1" 00:15:19.664 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:19.923 "nvmf_tgt_2" 00:15:19.923 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:19.923 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:19.923 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:19.923 
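With the target up, the multitarget test drives everything through multitarget_rpc.py: the default target makes the initial nvmf_get_targets list length 1, creating nvmf_tgt_1 and nvmf_tgt_2 raises it to 3, and the deletions on the lines that follow bring it back to 1. A condensed sketch of that create/verify/delete cycle is:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default plus the two named targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to the default only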
13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:20.182 true 00:15:20.182 13:43:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:20.182 true 00:15:20.182 13:43:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:20.182 13:43:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.441 rmmod nvme_tcp 00:15:20.441 rmmod nvme_fabrics 00:15:20.441 rmmod nvme_keyring 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1324286 ']' 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1324286 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 1324286 ']' 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 1324286 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1324286 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1324286' 00:15:20.441 killing process with pid 1324286 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 1324286 00:15:20.441 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 1324286 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.701 13:43:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.608 13:43:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:22.608 00:15:22.608 real 0m11.552s 00:15:22.608 user 0m10.540s 00:15:22.608 sys 0m6.037s 00:15:22.608 13:43:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:22.608 13:43:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:22.608 ************************************ 00:15:22.608 END TEST nvmf_multitarget 00:15:22.608 ************************************ 00:15:22.868 13:43:15 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:22.869 13:43:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:22.869 13:43:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:22.869 13:43:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.869 ************************************ 00:15:22.869 START TEST nvmf_rpc 00:15:22.869 ************************************ 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:22.869 * Looking for test storage... 00:15:22.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.869 13:43:15 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.869 
13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.869 13:43:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:29.471 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:29.471 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:29.471 Found net devices under 0000:af:00.0: cvl_0_0 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.471 
13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:29.471 Found net devices under 0000:af:00.1: cvl_0_1 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.471 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:29.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
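The nvmf_tcp_init sequence above pins the target-side port (cvl_0_0) inside a private network namespace and leaves the initiator port (cvl_0_1) in the root namespace, so a single host can drive both ends of the fabric. A minimal standalone sketch of that topology, using the interface names and 10.0.0.0/24 addresses from this run (substitute your own NIC names):

# target-side port gets its own namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP on the default port

The two pings traced here (root namespace to 10.0.0.2, then namespace to 10.0.0.1) confirm the path in both directions before the target is started.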
00:15:29.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:15:29.730 00:15:29.730 --- 10.0.0.2 ping statistics --- 00:15:29.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.730 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:15:29.730 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:15:29.990 00:15:29.990 --- 10.0.0.1 ping statistics --- 00:15:29.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.990 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1328413 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1328413 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 1328413 ']' 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:29.990 13:43:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 [2024-06-11 13:43:22.739297] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
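nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. Stripped of the harness, the pattern is roughly the following (paths relative to an SPDK checkout; the rpc_get_methods probe is one common way to wait for readiness, not the harness's exact waitforlisten implementation):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
nvmfpid=$!
# block until the app answers on the default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

Once the app is up, the nvmf_get_stats/jq checks that follow verify one poll group per reactor core (-m 0xF gives 4) and, after nvmf_create_transport -t tcp -o -u 8192, a TCP transport entry inside each group.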
00:15:29.990 [2024-06-11 13:43:22.739360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.990 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.990 [2024-06-11 13:43:22.847970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.249 [2024-06-11 13:43:22.931960] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.249 [2024-06-11 13:43:22.932007] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.249 [2024-06-11 13:43:22.932020] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.249 [2024-06-11 13:43:22.932032] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.249 [2024-06-11 13:43:22.932042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.249 [2024-06-11 13:43:22.932110] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.249 [2024-06-11 13:43:22.932203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.249 [2024-06-11 13:43:22.932318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.249 [2024-06-11 13:43:22.932318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:30.818 "tick_rate": 2500000000, 00:15:30.818 "poll_groups": [ 00:15:30.818 { 00:15:30.818 "name": "nvmf_tgt_poll_group_000", 00:15:30.818 "admin_qpairs": 0, 00:15:30.818 "io_qpairs": 0, 00:15:30.818 "current_admin_qpairs": 0, 00:15:30.818 "current_io_qpairs": 0, 00:15:30.818 "pending_bdev_io": 0, 00:15:30.818 "completed_nvme_io": 0, 00:15:30.818 "transports": [] 00:15:30.818 }, 00:15:30.818 { 00:15:30.818 "name": "nvmf_tgt_poll_group_001", 00:15:30.818 "admin_qpairs": 0, 00:15:30.818 "io_qpairs": 0, 00:15:30.818 "current_admin_qpairs": 0, 00:15:30.818 "current_io_qpairs": 0, 00:15:30.818 "pending_bdev_io": 0, 00:15:30.818 "completed_nvme_io": 0, 00:15:30.818 "transports": [] 00:15:30.818 }, 00:15:30.818 { 00:15:30.818 "name": "nvmf_tgt_poll_group_002", 00:15:30.818 "admin_qpairs": 0, 00:15:30.818 "io_qpairs": 0, 00:15:30.818 "current_admin_qpairs": 0, 00:15:30.818 "current_io_qpairs": 0, 00:15:30.818 "pending_bdev_io": 0, 00:15:30.818 "completed_nvme_io": 0, 00:15:30.818 "transports": [] 
00:15:30.818 }, 00:15:30.818 { 00:15:30.818 "name": "nvmf_tgt_poll_group_003", 00:15:30.818 "admin_qpairs": 0, 00:15:30.818 "io_qpairs": 0, 00:15:30.818 "current_admin_qpairs": 0, 00:15:30.818 "current_io_qpairs": 0, 00:15:30.818 "pending_bdev_io": 0, 00:15:30.818 "completed_nvme_io": 0, 00:15:30.818 "transports": [] 00:15:30.818 } 00:15:30.818 ] 00:15:30.818 }' 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:30.818 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.077 [2024-06-11 13:43:23.820491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.077 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:31.077 "tick_rate": 2500000000, 00:15:31.077 "poll_groups": [ 00:15:31.077 { 00:15:31.077 "name": "nvmf_tgt_poll_group_000", 00:15:31.077 "admin_qpairs": 0, 00:15:31.077 "io_qpairs": 0, 00:15:31.077 "current_admin_qpairs": 0, 00:15:31.077 "current_io_qpairs": 0, 00:15:31.077 "pending_bdev_io": 0, 00:15:31.077 "completed_nvme_io": 0, 00:15:31.077 "transports": [ 00:15:31.077 { 00:15:31.077 "trtype": "TCP" 00:15:31.077 } 00:15:31.077 ] 00:15:31.077 }, 00:15:31.077 { 00:15:31.077 "name": "nvmf_tgt_poll_group_001", 00:15:31.077 "admin_qpairs": 0, 00:15:31.077 "io_qpairs": 0, 00:15:31.077 "current_admin_qpairs": 0, 00:15:31.077 "current_io_qpairs": 0, 00:15:31.077 "pending_bdev_io": 0, 00:15:31.077 "completed_nvme_io": 0, 00:15:31.077 "transports": [ 00:15:31.077 { 00:15:31.077 "trtype": "TCP" 00:15:31.077 } 00:15:31.077 ] 00:15:31.077 }, 00:15:31.077 { 00:15:31.077 "name": "nvmf_tgt_poll_group_002", 00:15:31.077 "admin_qpairs": 0, 00:15:31.077 "io_qpairs": 0, 00:15:31.078 "current_admin_qpairs": 0, 00:15:31.078 "current_io_qpairs": 0, 00:15:31.078 "pending_bdev_io": 0, 00:15:31.078 "completed_nvme_io": 0, 00:15:31.078 "transports": [ 00:15:31.078 { 00:15:31.078 "trtype": "TCP" 00:15:31.078 } 00:15:31.078 ] 00:15:31.078 }, 00:15:31.078 { 00:15:31.078 "name": "nvmf_tgt_poll_group_003", 00:15:31.078 "admin_qpairs": 0, 00:15:31.078 "io_qpairs": 0, 00:15:31.078 "current_admin_qpairs": 0, 00:15:31.078 "current_io_qpairs": 0, 00:15:31.078 "pending_bdev_io": 0, 00:15:31.078 "completed_nvme_io": 0, 00:15:31.078 "transports": [ 00:15:31.078 { 00:15:31.078 "trtype": "TCP" 00:15:31.078 } 00:15:31.078 ] 00:15:31.078 } 00:15:31.078 ] 
00:15:31.078 }' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.078 Malloc1 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.078 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.337 13:43:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:31.337 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.337 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 13:43:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.337 [2024-06-11 13:43:24.004964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:15:31.337 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:15:31.338 [2024-06-11 13:43:24.033571] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:15:31.338 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:31.338 could not add new controller: failed to write to nvme-fabrics device 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.338 13:43:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.714 13:43:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:32.714 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:32.714 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.714 13:43:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:32.714 13:43:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:15:34.620 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.879 [2024-06-11 13:43:27.540489] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:15:34.879 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:34.879 could not add new controller: failed to write to nvme-fabrics device 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.879 13:43:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:36.257 13:43:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:36.257 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:36.257 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.257 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:36.257 13:43:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:38.163 13:43:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.163 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.422 [2024-06-11 13:43:31.095472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:38.422 13:43:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.800 13:43:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:39.800 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:39.800 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.800 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:39.800 13:43:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:41.706 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.964 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.965 [2024-06-11 13:43:34.675098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:41.965 
13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.965 13:43:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:43.343 13:43:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:43.343 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:43.343 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.343 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:43.343 13:43:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:45.247 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.506 13:43:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 [2024-06-11 13:43:38.213568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.506 13:43:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.881 13:43:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:46.881 13:43:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:46.881 13:43:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.881 13:43:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:46.881 13:43:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:48.783 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 [2024-06-11 13:43:41.717783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.042 13:43:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:50.418 13:43:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:50.418 13:43:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:50.418 13:43:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
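Each of the five passes in this loop is the same create/attach/verify/tear-down cycle. Condensed into one iteration as a sketch (rpc.py stands for scripts/rpc.py in the SPDK tree, which the harness's rpc_cmd wraps; the --hostnqn/--hostid flags used in this run are left out here since allow_any_host is enabled):

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5     # expose the malloc bdev as nsid 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done   # waitforserial
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The serial SPDKISFASTANDAWESOME set at subsystem creation is what surfaces in lsblk, which is how waitforserial and waitforserial_disconnect detect attach and detach.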
00:15:50.418 13:43:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:50.418 13:43:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:52.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.404 [2024-06-11 13:43:45.260515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.404 13:43:45 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:52.404 13:43:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.783 13:43:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.783 13:43:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:15:53.783 13:43:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.783 13:43:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:53.783 13:43:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
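
The connect/verify/disconnect cycle traced here is the heart of rpc.sh's loop: create a subsystem with serial SPDKISFASTANDAWESOME, add a TCP listener and a namespace, connect with nvme-cli, poll lsblk until the serial shows up, then disconnect and tear the subsystem down. A minimal sketch of that polling helper, assuming nvme-cli and lsblk are installed (the real waitforserial lives in autotest_common.sh and differs in detail):

  # Sketch: poll until a block device with the given SERIAL appears.
  waitforserial() {
      local serial=$1 i=0
      sleep 2                                  # give the kernel time to enumerate
      while (( i++ <= 15 )); do
          # count lsblk rows whose SERIAL column matches
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )); then
              return 0
          fi
          sleep 2
      done
      return 1                                 # device never surfaced
  }
  waitforserial SPDKISFASTANDAWESOME && nvme disconnect -n nqn.2016-06.io.spdk:cnode1
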
00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.319 [2024-06-11 13:43:48.817612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.319 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 [2024-06-11 13:43:48.865740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 [2024-06-11 13:43:48.917897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 [2024-06-11 13:43:48.966099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 [2024-06-11 13:43:49.014282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.320 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:56.320 "tick_rate": 2500000000, 00:15:56.320 "poll_groups": [ 00:15:56.320 { 00:15:56.320 "name": "nvmf_tgt_poll_group_000", 00:15:56.320 "admin_qpairs": 2, 00:15:56.320 
"io_qpairs": 196, 00:15:56.320 "current_admin_qpairs": 0, 00:15:56.320 "current_io_qpairs": 0, 00:15:56.320 "pending_bdev_io": 0, 00:15:56.320 "completed_nvme_io": 299, 00:15:56.320 "transports": [ 00:15:56.320 { 00:15:56.320 "trtype": "TCP" 00:15:56.320 } 00:15:56.320 ] 00:15:56.320 }, 00:15:56.320 { 00:15:56.320 "name": "nvmf_tgt_poll_group_001", 00:15:56.320 "admin_qpairs": 2, 00:15:56.320 "io_qpairs": 196, 00:15:56.320 "current_admin_qpairs": 0, 00:15:56.321 "current_io_qpairs": 0, 00:15:56.321 "pending_bdev_io": 0, 00:15:56.321 "completed_nvme_io": 246, 00:15:56.321 "transports": [ 00:15:56.321 { 00:15:56.321 "trtype": "TCP" 00:15:56.321 } 00:15:56.321 ] 00:15:56.321 }, 00:15:56.321 { 00:15:56.321 "name": "nvmf_tgt_poll_group_002", 00:15:56.321 "admin_qpairs": 1, 00:15:56.321 "io_qpairs": 196, 00:15:56.321 "current_admin_qpairs": 0, 00:15:56.321 "current_io_qpairs": 0, 00:15:56.321 "pending_bdev_io": 0, 00:15:56.321 "completed_nvme_io": 289, 00:15:56.321 "transports": [ 00:15:56.321 { 00:15:56.321 "trtype": "TCP" 00:15:56.321 } 00:15:56.321 ] 00:15:56.321 }, 00:15:56.321 { 00:15:56.321 "name": "nvmf_tgt_poll_group_003", 00:15:56.321 "admin_qpairs": 2, 00:15:56.321 "io_qpairs": 196, 00:15:56.321 "current_admin_qpairs": 0, 00:15:56.321 "current_io_qpairs": 0, 00:15:56.321 "pending_bdev_io": 0, 00:15:56.321 "completed_nvme_io": 300, 00:15:56.321 "transports": [ 00:15:56.321 { 00:15:56.321 "trtype": "TCP" 00:15:56.321 } 00:15:56.321 ] 00:15:56.321 } 00:15:56.321 ] 00:15:56.321 }' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.321 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.321 rmmod nvme_tcp 00:15:56.321 rmmod nvme_fabrics 00:15:56.321 rmmod nvme_keyring 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:56.581 13:43:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1328413 ']' 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1328413 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 1328413 ']' 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 1328413 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1328413 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1328413' 00:15:56.581 killing process with pid 1328413 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 1328413 00:15:56.581 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 1328413 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.840 13:43:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.748 13:43:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.748 00:15:58.748 real 0m36.008s 00:15:58.748 user 1m46.925s 00:15:58.748 sys 0m8.427s 00:15:58.748 13:43:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:58.748 13:43:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:58.748 ************************************ 00:15:58.748 END TEST nvmf_rpc 00:15:58.748 ************************************ 00:15:58.748 13:43:51 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:58.748 13:43:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:58.748 13:43:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:58.748 13:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:59.008 ************************************ 00:15:59.008 START TEST nvmf_invalid 00:15:59.008 ************************************ 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:59.008 * Looking for test storage... 
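
Before the nvmf_invalid output gets going, note how the nvmf_rpc test above validated its four-poll-group stats: the jsum helper pipes one nvmf_get_stats field through jq and sums it with awk. A compact sketch using the exact filters from the trace (the in-tree scripts/rpc.py client path is assumed):

  # Sum one numeric field across all poll groups in nvmf_get_stats output.
  stats=$(./scripts/rpc.py nvmf_get_stats)
  jsum() { echo "$stats" | jq "$1" | awk '{s+=$1} END {print s}'; }
  jsum '.poll_groups[].admin_qpairs'   # 7 in this run: 2+2+1+2 across the four groups
  jsum '.poll_groups[].io_qpairs'      # 784 in this run: 4 x 196

Both sums only need to be positive for the test to pass, which keeps the check robust against scheduler-dependent qpair placement across poll groups.
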
00:15:59.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 13:43:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.009 13:43:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:07.133 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:07.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:07.133 Found net devices under 0000:af:00.0: cvl_0_0 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:07.133 Found net devices under 0000:af:00.1: cvl_0_1 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.133 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:07.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:16:07.134 00:16:07.134 --- 10.0.0.2 ping statistics --- 00:16:07.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.134 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:16:07.134 00:16:07.134 --- 10.0.0.1 ping statistics --- 00:16:07.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.134 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1336746 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1336746 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 1336746 ']' 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:07.134 13:43:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:07.134 [2024-06-11 13:43:58.940024] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
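
The nvmf_tcp_init sequence traced above reduces to a small amount of ip/iptables plumbing: one port of the E810 pair (cvl_0_0) moves into a private network namespace to play the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings confirm reachability in each direction. Condensed from the trace (run as root; the cvl_* interface names are specific to this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target

Every later target-side command is then wrapped in 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD above), which is why the nvmf_tgt listener on 10.0.0.2:4420 binds inside the namespace.
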
00:16:07.134 [2024-06-11 13:43:58.940081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.134 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.134 [2024-06-11 13:43:59.047192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.134 [2024-06-11 13:43:59.132388] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.134 [2024-06-11 13:43:59.132436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.134 [2024-06-11 13:43:59.132449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.134 [2024-06-11 13:43:59.132461] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.134 [2024-06-11 13:43:59.132471] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.134 [2024-06-11 13:43:59.132538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.134 [2024-06-11 13:43:59.132630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.134 [2024-06-11 13:43:59.132742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.134 [2024-06-11 13:43:59.132742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:07.134 13:43:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6687 00:16:07.393 [2024-06-11 13:44:00.057182] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:07.393 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:07.393 { 00:16:07.393 "nqn": "nqn.2016-06.io.spdk:cnode6687", 00:16:07.393 "tgt_name": "foobar", 00:16:07.393 "method": "nvmf_create_subsystem", 00:16:07.393 "req_id": 1 00:16:07.393 } 00:16:07.393 Got JSON-RPC error response 00:16:07.393 response: 00:16:07.393 { 00:16:07.393 "code": -32603, 00:16:07.393 "message": "Unable to find target foobar" 00:16:07.393 }' 00:16:07.393 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:07.393 { 00:16:07.393 "nqn": "nqn.2016-06.io.spdk:cnode6687", 00:16:07.393 "tgt_name": "foobar", 00:16:07.393 "method": "nvmf_create_subsystem", 00:16:07.393 "req_id": 1 00:16:07.393 } 00:16:07.394 Got JSON-RPC error response 00:16:07.394 response: 00:16:07.394 { 00:16:07.394 "code": -32603, 00:16:07.394 "message": "Unable to find target foobar" 00:16:07.394 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:07.394 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:07.394 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26039 00:16:07.651 [2024-06-11 13:44:00.306131] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26039: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:07.651 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:07.651 { 00:16:07.651 "nqn": "nqn.2016-06.io.spdk:cnode26039", 00:16:07.651 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:07.651 "method": "nvmf_create_subsystem", 00:16:07.651 "req_id": 1 00:16:07.651 } 00:16:07.651 Got JSON-RPC error response 00:16:07.651 response: 00:16:07.651 { 00:16:07.651 "code": -32602, 00:16:07.651 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:07.651 }' 00:16:07.651 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:07.651 { 00:16:07.651 "nqn": "nqn.2016-06.io.spdk:cnode26039", 00:16:07.651 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:07.651 "method": "nvmf_create_subsystem", 00:16:07.651 "req_id": 1 00:16:07.652 } 00:16:07.652 Got JSON-RPC error response 00:16:07.652 response: 00:16:07.652 { 00:16:07.652 "code": -32602, 00:16:07.652 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:07.652 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:07.652 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:07.652 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13255 00:16:07.652 [2024-06-11 13:44:00.550904] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13255: invalid model number 'SPDK_Controller' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:07.911 { 00:16:07.911 "nqn": "nqn.2016-06.io.spdk:cnode13255", 00:16:07.911 "model_number": "SPDK_Controller\u001f", 00:16:07.911 "method": "nvmf_create_subsystem", 00:16:07.911 "req_id": 1 00:16:07.911 } 00:16:07.911 Got JSON-RPC error response 00:16:07.911 response: 00:16:07.911 { 00:16:07.911 "code": -32602, 00:16:07.911 "message": "Invalid MN SPDK_Controller\u001f" 00:16:07.911 }' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:07.911 { 00:16:07.911 "nqn": "nqn.2016-06.io.spdk:cnode13255", 00:16:07.911 "model_number": "SPDK_Controller\u001f", 00:16:07.911 "method": "nvmf_create_subsystem", 00:16:07.911 "req_id": 1 00:16:07.911 } 00:16:07.911 Got JSON-RPC error response 00:16:07.911 response: 00:16:07.911 { 00:16:07.911 "code": -32602, 00:16:07.911 "message": "Invalid MN SPDK_Controller\u001f" 00:16:07.911 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:07.911 13:44:00 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) [repetitive gen_random_s xtrace elided: the remaining loop iterations append the characters c N h n # * ] ~ = M a n e t one at a time via printf %x / echo -e / string+=] 00:16:07.912 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:16:07.912 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q[dCgqkcNhn#*]~=Manet' 00:16:07.912 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Q[dCgqkcNhn#*]~=Manet' nqn.2016-06.io.spdk:cnode30961 00:16:08.172 [2024-06-11 13:44:00.964306] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30961: invalid serial number 'Q[dCgqkcNhn#*]~=Manet' 00:16:08.172 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:08.172 { 00:16:08.172 "nqn": "nqn.2016-06.io.spdk:cnode30961", 00:16:08.172 "serial_number": "Q[dCgqkcNhn#*]~=Manet", 00:16:08.172 "method": "nvmf_create_subsystem", 00:16:08.172 "req_id": 1 00:16:08.172 } 00:16:08.172 Got JSON-RPC error response 00:16:08.172 response: 00:16:08.172 { 00:16:08.172 "code": -32602,
00:16:08.172 "message": "Invalid SN Q[dCgqkcNhn#*]~=Manet" 00:16:08.172 }' 00:16:08.172 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:08.172 { 00:16:08.172 "nqn": "nqn.2016-06.io.spdk:cnode30961", 00:16:08.172 "serial_number": "Q[dCgqkcNhn#*]~=Manet", 00:16:08.172 "method": "nvmf_create_subsystem", 00:16:08.172 "req_id": 1 00:16:08.172 } 00:16:08.172 Got JSON-RPC error response 00:16:08.172 response: 00:16:08.172 { 00:16:08.172 "code": -32602, 00:16:08.172 "message": "Invalid SN Q[dCgqkcNhn#*]~=Manet" 00:16:08.172 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:08.172 13:44:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:08.172 13:44:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.172 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:08.432 13:44:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:16:08.432 13:44:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:08.432 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 
13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q8IqL>.ktCb>$V("~xuqm!\ c5 eS[[/) wsc0~I' 00:16:08.433 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Q8IqL>.ktCb>$V("~xuqm!\ c5 eS[[/) wsc0~I' nqn.2016-06.io.spdk:cnode6172 00:16:08.693 [2024-06-11 13:44:01.518259] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6172: invalid model number 'Q8IqL>.ktCb>$V("~xuqm!\ c5 eS[[/) wsc0~I' 00:16:08.693 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:08.693 { 00:16:08.693 "nqn": "nqn.2016-06.io.spdk:cnode6172", 00:16:08.693 "model_number": "Q8IqL>.\u007fktCb>$V(\"~xuqm!\\ c5 eS[[/) wsc0~I", 00:16:08.693 "method": "nvmf_create_subsystem", 00:16:08.693 "req_id": 1 00:16:08.693 } 00:16:08.693 Got JSON-RPC error response 00:16:08.693 response: 00:16:08.693 { 00:16:08.693 "code": -32602, 00:16:08.693 "message": "Invalid MN Q8IqL>.\u007fktCb>$V(\"~xuqm!\\ c5 eS[[/) wsc0~I" 00:16:08.693 }' 00:16:08.693 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:08.693 { 00:16:08.693 "nqn": "nqn.2016-06.io.spdk:cnode6172", 00:16:08.693 "model_number": "Q8IqL>.\u007fktCb>$V(\"~xuqm!\\ c5 eS[[/) wsc0~I", 00:16:08.693 "method": "nvmf_create_subsystem", 00:16:08.693 "req_id": 1 00:16:08.693 } 00:16:08.693 Got JSON-RPC error response 00:16:08.693 response: 00:16:08.693 { 00:16:08.693 "code": -32602, 00:16:08.693 "message": "Invalid MN Q8IqL>.\u007fktCb>$V(\"~xuqm!\\ c5 eS[[/) wsc0~I" 00:16:08.693 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:08.693 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:08.952 [2024-06-11 13:44:01.702981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.952 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:09.211 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:09.211 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:09.211 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:09.211 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:09.211 13:44:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:09.469 [2024-06-11 13:44:02.192697] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:09.469 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:09.469 { 00:16:09.469 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:09.469 "listen_address": { 00:16:09.469 "trtype": "tcp", 00:16:09.469 "traddr": "", 00:16:09.469 "trsvcid": "4421" 00:16:09.469 }, 00:16:09.469 "method": "nvmf_subsystem_remove_listener", 00:16:09.469 "req_id": 1 00:16:09.469 } 00:16:09.469 Got JSON-RPC error response 00:16:09.469 response: 00:16:09.469 { 00:16:09.469 "code": -32602, 00:16:09.469 "message": "Invalid parameters" 00:16:09.469 }' 00:16:09.469 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:09.469 { 00:16:09.469 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:09.469 "listen_address": { 00:16:09.469 "trtype": "tcp", 00:16:09.469 "traddr": "", 00:16:09.469 "trsvcid": "4421" 00:16:09.469 }, 00:16:09.469 "method": "nvmf_subsystem_remove_listener", 00:16:09.469 "req_id": 1 00:16:09.469 } 00:16:09.469 Got JSON-RPC error response 00:16:09.469 response: 00:16:09.469 { 00:16:09.469 "code": -32602, 00:16:09.469 "message": "Invalid parameters" 00:16:09.469 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:09.469 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30109 -i 0 00:16:09.727 [2024-06-11 13:44:02.429499] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30109: invalid cntlid range [0-65519] 00:16:09.727 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:09.727 { 00:16:09.727 "nqn": "nqn.2016-06.io.spdk:cnode30109", 00:16:09.727 "min_cntlid": 0, 00:16:09.727 "method": "nvmf_create_subsystem", 00:16:09.727 "req_id": 1 00:16:09.727 } 00:16:09.727 Got JSON-RPC error response 00:16:09.727 response: 00:16:09.727 { 00:16:09.727 "code": -32602, 00:16:09.727 "message": "Invalid cntlid range [0-65519]" 00:16:09.727 }' 00:16:09.727 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:09.727 { 00:16:09.727 "nqn": "nqn.2016-06.io.spdk:cnode30109", 00:16:09.727 "min_cntlid": 0, 00:16:09.727 "method": "nvmf_create_subsystem", 00:16:09.727 "req_id": 1 00:16:09.727 } 00:16:09.727 Got JSON-RPC error response 00:16:09.727 response: 00:16:09.727 { 00:16:09.727 "code": -32602, 00:16:09.727 "message": "Invalid cntlid range [0-65519]" 00:16:09.727 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:16:09.727 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26482 -i 65520 00:16:09.986 [2024-06-11 13:44:02.674339] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26482: invalid cntlid range [65520-65519] 00:16:09.986 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:09.986 { 00:16:09.987 "nqn": "nqn.2016-06.io.spdk:cnode26482", 00:16:09.987 "min_cntlid": 65520, 00:16:09.987 "method": "nvmf_create_subsystem", 00:16:09.987 "req_id": 1 00:16:09.987 } 00:16:09.987 Got JSON-RPC error response 00:16:09.987 response: 00:16:09.987 { 00:16:09.987 "code": -32602, 00:16:09.987 "message": "Invalid cntlid range [65520-65519]" 00:16:09.987 }' 00:16:09.987 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:09.987 { 00:16:09.987 "nqn": "nqn.2016-06.io.spdk:cnode26482", 00:16:09.987 "min_cntlid": 65520, 00:16:09.987 "method": "nvmf_create_subsystem", 00:16:09.987 "req_id": 1 00:16:09.987 } 00:16:09.987 Got JSON-RPC error response 00:16:09.987 response: 00:16:09.987 { 00:16:09.987 "code": -32602, 00:16:09.987 "message": "Invalid cntlid range [65520-65519]" 00:16:09.987 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:09.987 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15041 -I 0 00:16:10.246 [2024-06-11 13:44:02.911199] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15041: invalid cntlid range [1-0] 00:16:10.246 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:10.246 { 00:16:10.246 "nqn": "nqn.2016-06.io.spdk:cnode15041", 00:16:10.246 "max_cntlid": 0, 00:16:10.246 "method": "nvmf_create_subsystem", 00:16:10.246 "req_id": 1 00:16:10.246 } 00:16:10.246 Got JSON-RPC error response 00:16:10.246 response: 00:16:10.246 { 00:16:10.246 "code": -32602, 00:16:10.246 "message": "Invalid cntlid range [1-0]" 00:16:10.246 }' 00:16:10.246 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:10.246 { 00:16:10.246 "nqn": "nqn.2016-06.io.spdk:cnode15041", 00:16:10.246 "max_cntlid": 0, 00:16:10.246 "method": "nvmf_create_subsystem", 00:16:10.246 "req_id": 1 00:16:10.246 } 00:16:10.246 Got JSON-RPC error response 00:16:10.246 response: 00:16:10.246 { 00:16:10.246 "code": -32602, 00:16:10.246 "message": "Invalid cntlid range [1-0]" 00:16:10.246 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:10.246 13:44:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode337 -I 65520 00:16:10.246 [2024-06-11 13:44:03.139973] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode337: invalid cntlid range [1-65520] 00:16:10.506 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:10.506 { 00:16:10.506 "nqn": "nqn.2016-06.io.spdk:cnode337", 00:16:10.506 "max_cntlid": 65520, 00:16:10.506 "method": "nvmf_create_subsystem", 00:16:10.506 "req_id": 1 00:16:10.506 } 00:16:10.506 Got JSON-RPC error response 00:16:10.506 response: 00:16:10.506 { 00:16:10.506 "code": -32602, 00:16:10.506 "message": "Invalid cntlid range [1-65520]" 00:16:10.506 }' 00:16:10.506 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:16:10.506 { 00:16:10.506 "nqn": "nqn.2016-06.io.spdk:cnode337", 00:16:10.506 "max_cntlid": 65520, 00:16:10.506 "method": "nvmf_create_subsystem", 00:16:10.506 "req_id": 1 00:16:10.506 } 00:16:10.506 Got JSON-RPC error response 00:16:10.506 response: 00:16:10.506 { 00:16:10.506 "code": -32602, 00:16:10.506 "message": "Invalid cntlid range [1-65520]" 00:16:10.506 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:10.506 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19861 -i 6 -I 5 00:16:10.506 [2024-06-11 13:44:03.380800] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19861: invalid cntlid range [6-5] 00:16:10.506 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:10.506 { 00:16:10.506 "nqn": "nqn.2016-06.io.spdk:cnode19861", 00:16:10.506 "min_cntlid": 6, 00:16:10.506 "max_cntlid": 5, 00:16:10.506 "method": "nvmf_create_subsystem", 00:16:10.506 "req_id": 1 00:16:10.506 } 00:16:10.506 Got JSON-RPC error response 00:16:10.506 response: 00:16:10.506 { 00:16:10.506 "code": -32602, 00:16:10.506 "message": "Invalid cntlid range [6-5]" 00:16:10.506 }' 00:16:10.506 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:10.506 { 00:16:10.506 "nqn": "nqn.2016-06.io.spdk:cnode19861", 00:16:10.506 "min_cntlid": 6, 00:16:10.506 "max_cntlid": 5, 00:16:10.506 "method": "nvmf_create_subsystem", 00:16:10.506 "req_id": 1 00:16:10.506 } 00:16:10.506 Got JSON-RPC error response 00:16:10.506 response: 00:16:10.506 { 00:16:10.506 "code": -32602, 00:16:10.506 "message": "Invalid cntlid range [6-5]" 00:16:10.506 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:10.506 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:10.766 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:10.766 { 00:16:10.766 "name": "foobar", 00:16:10.766 "method": "nvmf_delete_target", 00:16:10.766 "req_id": 1 00:16:10.766 } 00:16:10.766 Got JSON-RPC error response 00:16:10.767 response: 00:16:10.767 { 00:16:10.767 "code": -32602, 00:16:10.767 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:10.767 }' 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:10.767 { 00:16:10.767 "name": "foobar", 00:16:10.767 "method": "nvmf_delete_target", 00:16:10.767 "req_id": 1 00:16:10.767 } 00:16:10.767 Got JSON-RPC error response 00:16:10.767 response: 00:16:10.767 { 00:16:10.767 "code": -32602, 00:16:10.767 "message": "The specified target doesn't exist, cannot delete it." 
00:16:10.767 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.767 rmmod nvme_tcp 00:16:10.767 rmmod nvme_fabrics 00:16:10.767 rmmod nvme_keyring 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1336746 ']' 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1336746 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 1336746 ']' 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 1336746 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1336746 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1336746' 00:16:10.767 killing process with pid 1336746 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 1336746 00:16:10.767 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 1336746 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.027 13:44:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.565 13:44:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.565 00:16:13.565 real 0m14.257s 00:16:13.565 user 0m23.466s 00:16:13.565 sys 0m6.768s 00:16:13.565 13:44:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:13.565 13:44:05 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:13.565 ************************************ 00:16:13.565 END TEST nvmf_invalid 00:16:13.565 ************************************ 00:16:13.565 13:44:05 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:13.565 13:44:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:13.565 13:44:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:13.565 13:44:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.565 ************************************ 00:16:13.565 START TEST nvmf_abort 00:16:13.565 ************************************ 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:13.565 * Looking for test storage... 00:16:13.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.565 13:44:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
[paths/export.sh xtrace elided: export.sh repeatedly prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, exports PATH, and echoes the resulting value] 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.566 13:44:06
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.566 13:44:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.184 
13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:20.184 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:20.184 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.184 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:20.185 Found net devices under 0000:af:00.0: cvl_0_0 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:20.185 Found net devices under 0000:af:00.1: cvl_0_1 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.185 13:44:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.185 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.185 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:20.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:20.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:16:20.444 00:16:20.444 --- 10.0.0.2 ping statistics --- 00:16:20.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.444 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:16:20.444 00:16:20.444 --- 10.0.0.1 ping statistics --- 00:16:20.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.444 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.444 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1341498 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1341498 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 1341498 ']' 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:20.445 13:44:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:20.445 [2024-06-11 13:44:13.220393] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
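[Annotation] The nvmf_tcp_init trace above (nvmf/common.sh@229-268) is the entire network fixture for this suite: one port of the dual-port E810 NIC is moved into a private network namespace to act as the target, while the other port stays in the root namespace as the initiator, giving a real point-to-point NVMe/TCP link on a single host. Below is a condensed sketch of that plumbing, using this run's interface names (cvl_0_0/cvl_0_1) and the harness's fixed 10.0.0.0/24 addressing; the authoritative logic lives in test/nvmf/common.sh:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                           # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> root namespace

The two ping exchanges whose replies appear around this point are that sanity check; only after both directions succeed does the harness launch nvmf_tgt inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE invocation traced above).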
00:16:20.445 [2024-06-11 13:44:13.220456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.445 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.445 [2024-06-11 13:44:13.318497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:20.704 [2024-06-11 13:44:13.405878] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.704 [2024-06-11 13:44:13.405921] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.704 [2024-06-11 13:44:13.405934] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.704 [2024-06-11 13:44:13.405946] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.704 [2024-06-11 13:44:13.405956] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.704 [2024-06-11 13:44:13.406059] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.704 [2024-06-11 13:44:13.406175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.704 [2024-06-11 13:44:13.406175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 [2024-06-11 13:44:14.126897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 Malloc0 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 Delay0 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:21.343 13:44:14 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 [2024-06-11 13:44:14.203697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:21.343 13:44:14 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:21.602 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.602 [2024-06-11 13:44:14.335521] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:24.136 Initializing NVMe Controllers 00:16:24.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:24.136 controller IO queue size 128 less than required 00:16:24.136 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:24.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:24.136 Initialization complete. Launching workers. 
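The abort case reduces to a simple idea: stack a delay vbdev on a malloc bdev so every I/O stays in flight for about a second (the -r/-t/-w/-n arguments to bdev_delay_create are average and p99 read/write latencies in microseconds, so 1000000 is roughly 1 s), export it over NVMe/TCP, and let the abort example race abort commands against the queued reads. Note the core split: nvmf_tgt runs with -m 0xE (binary 1110, reactors on cores 1-3, matching the three "Reactor started on core N" notices above), while the abort tool pins to -c 0x1, core 0, so target and initiator never share a core. A condensed replay of the rpc_cmd sequence traced above (paths shortened; the target itself was started under ip netns exec cvl_0_0_ns_spdk):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0          # 64 MB malloc disk, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s of injected latency
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side: keep 128 reads outstanding for 1 second, aborting as they run
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

In the statistics that follow, "success" counts abort commands that caught their target I/O in flight, while "unsuccess" (the tool's own wording) appears to count aborts whose target command had already completed; failed 0 is what the test asserts on.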
00:16:24.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31380 00:16:24.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31441, failed to submit 62 00:16:24.136 success 31384, unsuccess 57, failed 0 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.136 rmmod nvme_tcp 00:16:24.136 rmmod nvme_fabrics 00:16:24.136 rmmod nvme_keyring 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1341498 ']' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1341498 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 1341498 ']' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 1341498 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1341498 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1341498' 00:16:24.136 killing process with pid 1341498 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 1341498 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 1341498 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.136 13:44:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.042 13:44:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.042 00:16:26.042 real 0m12.921s 00:16:26.042 user 0m13.932s 00:16:26.042 sys 0m6.502s 00:16:26.042 13:44:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:26.042 13:44:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 ************************************ 00:16:26.042 END TEST nvmf_abort 00:16:26.042 ************************************ 00:16:26.301 13:44:18 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:26.301 13:44:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:26.301 13:44:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:26.301 13:44:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.301 ************************************ 00:16:26.301 START TEST nvmf_ns_hotplug_stress 00:16:26.301 ************************************ 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:26.301 * Looking for test storage... 00:16:26.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.301 13:44:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.301 13:44:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.301 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.302 13:44:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.428 13:44:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.428 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.428 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
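This discovery preamble is identical for every test in the run: nvmf/common.sh keeps per-vendor allowlists of NIC device IDs (0x1592 and 0x159b for Intel E810, 0x37d2 for X722, plus a set of Mellanox IDs), intersects them with the PCI bus, and resolves each match to its kernel netdev through sysfs. Roughly, the netdev-resolution step amounts to the following (illustrative sketch; the real script iterates a cached PCI scan rather than a hard-coded list):

    # the two E810 functions found in this run (vendor 0x8086, device 0x159b, ice driver)
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done

With two usable ports, the first (cvl_0_0) becomes the target interface and the second (cvl_0_1) the initiator, exactly as in the nvmf_abort run above.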
00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.428 13:44:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:16:34.428 00:16:34.428 --- 10.0.0.2 ping statistics --- 00:16:34.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.428 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:16:34.428 00:16:34.428 --- 10.0.0.1 ping statistics --- 00:16:34.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.428 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1345810 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1345810 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 1345810 ']' 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:34.428 13:44:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.428 [2024-06-11 13:44:26.313110] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:16:34.428 [2024-06-11 13:44:26.313174] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.428 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.428 [2024-06-11 13:44:26.410685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.428 [2024-06-11 13:44:26.497135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:34.428 [2024-06-11 13:44:26.497177] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.428 [2024-06-11 13:44:26.497190] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.428 [2024-06-11 13:44:26.497202] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.428 [2024-06-11 13:44:26.497212] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.428 [2024-06-11 13:44:26.497315] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.428 [2024-06-11 13:44:26.497431] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.428 [2024-06-11 13:44:26.497431] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:34.428 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:34.686 [2024-06-11 13:44:27.483096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.686 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:34.944 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.203 [2024-06-11 13:44:27.962077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.203 13:44:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:35.462 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:35.721 Malloc0 00:16:35.721 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:35.721 Delay0 00:16:35.980 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.980 13:44:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:36.239 NULL1 00:16:36.240 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:36.497 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1346371 00:16:36.498 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:36.498 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:36.498 13:44:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.876 Read completed with error (sct=0, sc=11) 00:16:37.876 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.877 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:37.877 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:38.136 true 00:16:38.136 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:38.136 13:44:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.072 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.072 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:39.072 13:44:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:39.333 true 00:16:39.333 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:39.333 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.592 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.851 13:44:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:39.851 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:40.109 true 00:16:40.109 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:40.109 13:44:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.046 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.046 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:41.046 13:44:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:41.305 true 00:16:41.305 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:41.305 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.564 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.823 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:41.823 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:42.081 true 00:16:42.081 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:42.081 13:44:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.018 13:44:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.276 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:43.276 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:43.534 true 00:16:43.534 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:43.534 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.792 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:44.051 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1007 00:16:44.051 13:44:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:44.310 true 00:16:44.310 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:44.310 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.246 13:44:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:45.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:45.246 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:45.246 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:45.504 true 00:16:45.505 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:45.505 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.763 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:46.022 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:46.022 13:44:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:46.312 true 00:16:46.312 13:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:46.312 13:44:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.285 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.544 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:47.544 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:47.804 true 00:16:47.804 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:47.804 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.063 13:44:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
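The cadence repeating from here to the end of the excerpt is the hot-plug loop at the core of ns_hotplug_stress.sh: while spdk_nvme_perf (PID 1346371) keeps reading, the script probes that perf is still alive, detaches namespace 1, re-attaches Delay0, and grows NULL1 (created earlier with bdev_null_create NULL1 1000 512, i.e. 1000 blocks of 512 bytes) by one block. The bursts of "Read completed with error (sct=0, sc=11)" are expected: generic status 11 (0x0b, Invalid Namespace or Format) is what reads racing a namespace hot-remove should see. A condensed sketch of the loop as reconstructed from the trace (rpc path and nqn as above; the real script's control flow may differ in detail):

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do                       # perf still running?
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove ns 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                     # grow the null bdev
    done

The lone "true" lines in the trace are the JSON replies from bdev_null_resize; the loop winds down once the 30-second perf run exits and the kill -0 probe starts failing.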
00:16:48.323 13:44:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:48.323 13:44:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:48.582 true 00:16:48.582 13:44:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:48.582 13:44:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.521 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.521 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:49.521 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:49.780 true 00:16:49.780 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:49.780 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:50.039 13:44:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:50.299 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:50.299 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:50.557 true 00:16:50.557 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:50.557 13:44:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:51.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:51.498 13:44:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:51.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:51.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:51.757 13:44:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:51.757 13:44:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:51.757 true 00:16:52.015 13:44:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:52.015 13:44:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:52.015 13:44:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:52.274 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:52.275 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:52.534 true 00:16:52.534 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:52.534 13:44:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.913 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:53.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:53.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:53.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:53.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:53.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:53.913 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:53.913 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:54.172 true 00:16:54.172 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:54.172 13:44:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.110 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:55.110 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:55.110 13:44:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:55.368 true 00:16:55.368 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:55.368 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.627 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:55.886 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:55.886 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:56.145 true 00:16:56.145 13:44:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:56.145 13:44:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:57.083 13:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:57.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:57.083 13:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:57.083 13:44:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:57.342 true 00:16:57.342 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:57.342 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.601 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:57.861 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:57.861 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:58.120 true 00:16:58.120 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:58.120 13:44:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.058 13:44:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:59.316 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:59.316 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:59.573 true 00:16:59.573 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:16:59.573 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.831 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:00.092 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:17:00.092 13:44:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:17:00.351 true 00:17:00.351 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:17:00.351 
13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.289 13:44:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:01.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:01.289 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:17:01.289 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:17:01.548 true 00:17:01.548 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:17:01.548 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.807 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:02.066 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:17:02.066 13:44:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:17:02.325 true 00:17:02.325 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:17:02.325 13:44:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.299 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:03.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:03.557 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:17:03.557 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:17:03.815 true 00:17:03.815 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:17:03.815 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.074 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:04.074 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:17:04.074 13:44:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:17:04.333 true 00:17:04.333 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 
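Note: the records above are iterations of a single loop in ns_hotplug_stress.sh. Line 44 checks that the background I/O job (PID 1346371) is still alive, lines 45 and 46 hot-remove and re-add the Delay0 namespace, and lines 49 and 50 bump null_size and grow the NULL1 bdev while I/O is in flight. A minimal bash sketch of that loop, reconstructed from the sh@44-sh@50 trace markers; rpc, perf_pid, and the starting size are assumed names and values, while the RPC verbs, NULL1, and Delay0 come verbatim from the log:

    #!/usr/bin/env bash
    # Sketch only: reconstructed from the sh@44-sh@50 xtrace markers above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=1346371    # assumed variable name; the PID itself appears in the kill -0 records
    null_size=1000      # assumed starting size; this excerpt is already past 1015

    while kill -0 "$perf_pid"; do                                      # sh@44: run while the I/O job lives
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                   # sh@49: next target size
        $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: resize NULL1 under load
    done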
00:17:04.333 13:44:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.710 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.710 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:17:05.710 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:17:05.969 true 00:17:05.969 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371 00:17:05.969 13:44:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:06.795 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:06.795 Initializing NVMe Controllers 00:17:06.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:06.795 Controller IO queue size 128, less than required. 00:17:06.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.795 Controller IO queue size 128, less than required. 00:17:06.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:06.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:06.796 Initialization complete. Launching workers. 
00:17:06.796 ========================================================
00:17:06.796 Latency(us)
00:17:06.796 Device Information : IOPS MiB/s Average min max
00:17:06.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 974.23 0.48 79114.64 2882.92 1049482.63
00:17:06.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16328.55 7.97 7838.80 2107.75 502997.93
00:17:06.796 ========================================================
00:17:06.796 Total : 17302.78 8.45 11851.98 2107.75 1049482.63
00:17:06.796
00:17:06.796 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:17:06.796 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:17:07.055 true
00:17:07.055 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1346371
00:17:07.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1346371) - No such process
00:17:07.055 13:44:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1346371
00:17:07.055 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:07.314 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:17:07.574 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:17:07.574 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:17:07.574 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:17:07.574 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:17:07.574 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:17:07.833 null0
00:17:07.833 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:17:07.833 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:17:07.833 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:17:08.092 null1
00:17:08.092 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:17:08.092 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:17:08.092 13:45:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:17:08.092 null2
00:17:08.092 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:17:08.092 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:17:08.092 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:17:08.352 null3
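Note: with the I/O generator gone (the kill -0 probe at line 44 finally returns "No such process"), the script reaps it at line 53, drops the two remaining namespaces at lines 54 and 55, and sets up the concurrent phase: eight workers, each backed by its own null bdev (the two trailing arguments at sh@60 are the size in MB and the block size). A sketch of that setup loop per the sh@58-sh@60 markers, reusing the assumed rpc alias from the earlier sketch:

    nthreads=8                                    # sh@58
    pids=()                                       # sh@58: will collect the worker PIDs
    for ((i = 0; i < nthreads; i++)); do          # sh@59
        $rpc bdev_null_create "null$i" 100 4096   # sh@60: 100 MB null bdev, 4096-byte blocks
    done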
00:17:08.352 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:08.352 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:08.352 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:17:08.611 null4 00:17:08.611 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:08.611 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:08.611 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:17:08.869 null5 00:17:08.869 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:08.869 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:08.869 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:17:09.128 null6 00:17:09.128 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:09.128 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:09.128 13:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:17:09.388 null7 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
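Note: the interleaved sh@14-sh@18 records from here on are the bodies of those workers. Each add_remove call pins one namespace ID to one bdev (the sh@14 record shows local nsid=... bdev=...) and cycles it through ten attach/detach rounds (sh@16 bounds the loop, sh@17 attaches, sh@18 detaches). Reconstructed from those markers, the worker is approximately:

    add_remove() {
        local nsid=$1 bdev=$2                                                         # sh@14
        for ((i = 0; i < 10; i++)); do                                                # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }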
00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
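Note: the sh@62-sh@64 records shuffled through the same stretch are the dispatch loop that launches those workers in the background, which is why eight trace streams interleave from here on; the join on all eight PIDs shows up as the wait record at sh@66 just below. A sketch of the dispatcher per those markers (the nsid-to-bdev pairing follows the add_remove 1 null0, add_remove 2 null1, ... records):

    for ((i = 0; i < nthreads; i++)); do    # sh@62
        add_remove "$((i + 1))" "null$i" &  # sh@63: NSID i+1 works against bdev null<i>
        pids+=($!)                          # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                       # sh@66: block until every worker finishes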
00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1352105 1352107 1352109 1352113 1352116 1352118 1352120 1352123 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.388 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:09.648 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.907 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:10.166 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:10.166 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:10.166 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:10.166 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:10.166 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:10.166 13:45:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:10.166 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.167 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.426 13:45:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.426 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.685 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:10.944 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.944 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.944 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:10.944 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.944 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:10.945 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.205 13:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:17:11.205 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:11.464 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:11.723 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.723 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.724 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:11.983 13:45:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:11.983 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.243 13:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:12.243 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:12.503 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:12.763 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:13.023 13:45:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.023 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:13.283 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.283 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.283 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:13.283 13:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.283 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.543 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:13.803 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:14.063 13:45:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.063 13:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.063 rmmod nvme_tcp 00:17:14.322 rmmod nvme_fabrics 00:17:14.322 rmmod nvme_keyring 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1345810 ']' 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1345810 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 1345810 ']' 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 1345810 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1345810 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1345810' 00:17:14.322 killing 
process with pid 1345810 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 1345810 00:17:14.322 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 1345810 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.582 13:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.489 13:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.489 00:17:16.489 real 0m50.344s 00:17:16.489 user 3m18.741s 00:17:16.490 sys 0m21.516s 00:17:16.490 13:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:16.490 13:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.490 ************************************ 00:17:16.490 END TEST nvmf_ns_hotplug_stress 00:17:16.490 ************************************ 00:17:16.749 13:45:09 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:16.749 13:45:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:16.749 13:45:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:16.749 13:45:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.749 ************************************ 00:17:16.749 START TEST nvmf_connect_stress 00:17:16.749 ************************************ 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:16.749 * Looking for test storage... 
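The hotplug run that just finished above is driven by a tight loop in ns_hotplug_stress.sh (the @16-@18 markers in the trace): up to ten passes that attach namespaces 1-8, each backed by a null bdev null0-null7, to nqn.2016-06.io.spdk:cnode1 and detach them again while connections stay open. A minimal sketch of that loop, assuming a shuffled order stands in for whatever interleaving the real script uses (rpc.py path shortened):

    for (( i = 0; i < 10; i++ )); do
        for n in $(shuf -e {1..8}); do      # attach in varying order
            scripts/rpc.py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))"
        done
        for n in $(shuf -e {1..8}); do      # detach in varying order
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
    done

In the trace above the adds and removes are interleaved within a pass rather than split into two clean phases, but the invariant is the same: every namespace added at @17 is eventually removed at @18 before the trap at @68 tears the target down.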
00:17:16.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.749 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.750 13:45:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:23.323 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:23.323 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:23.323 Found net devices under 0000:af:00.0: cvl_0_0 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.323 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.324 13:45:16 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:23.324 Found net devices under 0000:af:00.1: cvl_0_1 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.324 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:23.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:17:23.583 00:17:23.583 --- 10.0.0.2 ping statistics --- 00:17:23.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.583 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:23.583 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:23.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:17:23.843 00:17:23.843 --- 10.0.0.1 ping statistics --- 00:17:23.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.843 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1357245 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1357245 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 1357245 ']' 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:23.843 13:45:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.843 [2024-06-11 13:45:16.599594] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
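Before the target comes up, nvmftestinit has wired the two E810 ports into an on-host loopback: the target-side port cvl_0_0 is moved into network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace, and an iptables rule admits the NVMe/TCP port. Condensed from the commands in the trace above, with the NVMF_* variables resolved to their values:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

nvmf_tgt itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 1357245), which is why its DPDK initialization banner continues below.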
00:17:23.843 [2024-06-11 13:45:16.599656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.843 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.843 [2024-06-11 13:45:16.698847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:24.102 [2024-06-11 13:45:16.782209] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.102 [2024-06-11 13:45:16.782255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.102 [2024-06-11 13:45:16.782268] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.102 [2024-06-11 13:45:16.782280] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.102 [2024-06-11 13:45:16.782290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.102 [2024-06-11 13:45:16.782402] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.102 [2024-06-11 13:45:16.782523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.102 [2024-06-11 13:45:16.782524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.705 [2024-06-11 13:45:17.506443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.705 [2024-06-11 13:45:17.538631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.705 NULL1 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1357295 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.705 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.706 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.706 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.706 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.706 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.706 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.964 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.965 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.222 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.223 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:25.223 13:45:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.223 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.223 13:45:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.480 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.480 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:25.480 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.480 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.480 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.739 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.739 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:25.739 13:45:18 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.739 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.739 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.307 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.307 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:26.307 13:45:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.307 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.307 13:45:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.566 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:26.566 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.566 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.566 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.825 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.825 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:26.825 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.825 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.825 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.084 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:27.084 13:45:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.084 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.084 13:45:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.652 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.652 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:27.652 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.652 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.652 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.911 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.912 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:27.912 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.912 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.912 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.170 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.170 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:28.170 13:45:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:17:28.170 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.170 13:45:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.429 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.429 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:28.429 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.429 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.429 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.687 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.687 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:28.687 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.687 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.687 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.256 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.256 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:29.256 13:45:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.257 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.257 13:45:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.515 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.515 13:45:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:29.515 13:45:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.515 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.515 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.773 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.773 13:45:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:29.773 13:45:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.773 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.773 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.032 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.032 13:45:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:30.032 13:45:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.032 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.032 13:45:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.290 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.290 13:45:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:30.290 13:45:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.290 13:45:23 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.290 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.856 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.856 13:45:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:30.856 13:45:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.857 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.857 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.116 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.116 13:45:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:31.116 13:45:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.116 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.116 13:45:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.374 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.374 13:45:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:31.374 13:45:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.374 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.374 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.633 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.633 13:45:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:31.633 13:45:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.633 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.633 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.202 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.202 13:45:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:32.202 13:45:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.202 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.202 13:45:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.461 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.461 13:45:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:32.461 13:45:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.461 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.461 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.719 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.719 13:45:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:32.719 13:45:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.719 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:17:32.719 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.978 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.978 13:45:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:32.978 13:45:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.978 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.978 13:45:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.237 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.237 13:45:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:33.237 13:45:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.237 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.237 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.804 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.804 13:45:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:33.804 13:45:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.804 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.804 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.062 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.062 13:45:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:34.062 13:45:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.062 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.062 13:45:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.321 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.321 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:34.321 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.321 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.321 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.580 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.580 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:34.580 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.581 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.581 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.839 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1357295 00:17:35.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1357295) - No such process 
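The repeated kill -0 1357295 / rpc_cmd pairs above are connect_stress.sh polling the stress process: each pass checks that the PID is still alive, then drives an RPC against the target, until kill -0 finally reports "No such process" and the script falls through to wait. A minimal sketch of that watch pattern, assuming a $stress_pid variable and using nvmf_get_subsystems as a stand-in for whatever RPC the script actually issues:

# poll until the stress process exits; keep the RPC path busy meanwhile
while kill -0 "$stress_pid" 2>/dev/null; do
    rpc_cmd nvmf_get_subsystems > /dev/null   # illustrative RPC; any call exercises the target
done
wait "$stress_pid" || true                    # reap the child once kill -0 has failed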
00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1357295 00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:35.098 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.099 rmmod nvme_tcp 00:17:35.099 rmmod nvme_fabrics 00:17:35.099 rmmod nvme_keyring 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1357245 ']' 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1357245 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 1357245 ']' 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 1357245 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1357245 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1357245' 00:17:35.099 killing process with pid 1357245 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 1357245 00:17:35.099 13:45:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 1357245 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.358 13:45:28 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.264 13:45:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:37.264 00:17:37.264 real 0m20.701s 00:17:37.264 user 0m40.904s 00:17:37.264 sys 0m10.202s 00:17:37.264 13:45:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:37.264 13:45:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 ************************************ 00:17:37.264 END TEST nvmf_connect_stress 00:17:37.264 ************************************ 00:17:37.524 13:45:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:37.524 13:45:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:37.524 13:45:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:37.524 13:45:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:37.524 ************************************ 00:17:37.524 START TEST nvmf_fused_ordering 00:17:37.524 ************************************ 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:37.524 * Looking for test storage... 00:17:37.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.524 13:45:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:37.525 13:45:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.649 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.650 
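The array assignments above are nvmf/common.sh sorting the supported NIC PCI device IDs into buckets (e810, x722, mlx) before walking the bus; the e810 bucket wins on this rig, and the loop that follows resolves each matching PCI address to its kernel net devices under sysfs. A simplified sketch of that discovery step, assuming lspci is available (the real helper reads a prebuilt pci_bus_cache rather than shelling out):

# simplified view of the E810 scan: PCI address -> net device names
e810_ids=(0x1592 0x159b)                            # Intel E810 device IDs from the buckets above
for pci in $(lspci -Dn -d 8086: | awk '{print $1}'); do
    dev_id=0x$(lspci -n -s "$pci" | awk '{print $3}' | cut -d: -f2)
    [[ " ${e810_ids[*]} " == *" $dev_id "* ]] || continue
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        net_devs+=("${net##*/}")                    # e.g. cvl_0_0, cvl_0_1
    done
done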
13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:45.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:45.650 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.650 13:45:37 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:45.650 Found net devices under 0000:af:00.0: cvl_0_0 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:45.650 Found net devices under 0000:af:00.1: cvl_0_1 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:45.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:17:45.650 00:17:45.650 --- 10.0.0.2 ping statistics --- 00:17:45.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.650 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:17:45.650 00:17:45.650 --- 10.0.0.1 ping statistics --- 00:17:45.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.650 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:45.650 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1362844 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1362844 00:17:45.651 13:45:37 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 1362844 ']' 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:45.651 13:45:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.651 [2024-06-11 13:45:37.619745] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:17:45.651 [2024-06-11 13:45:37.619805] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.651 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.651 [2024-06-11 13:45:37.719069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.651 [2024-06-11 13:45:37.803945] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.651 [2024-06-11 13:45:37.803989] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.651 [2024-06-11 13:45:37.804003] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.651 [2024-06-11 13:45:37.804014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.651 [2024-06-11 13:45:37.804024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
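Condensed from the common.sh trace above: nvmf_tcp_init dedicates one E810 port to the target inside a fresh network namespace and leaves the other in the root namespace for the initiator, so the test traffic actually traverses the physical link. The two one-packet pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify the path before the target starts. The commands, as executed:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace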
00:17:45.651 [2024-06-11 13:45:37.804052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.651 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:45.651 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:17:45.651 13:45:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:45.651 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:45.651 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 [2024-06-11 13:45:38.576359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 [2024-06-11 13:45:38.592571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 NULL1 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.910 13:45:38 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.910 13:45:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:45.910 [2024-06-11 13:45:38.646301] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:17:45.910 [2024-06-11 13:45:38.646338] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363123 ] 00:17:45.910 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.542 Attached to nqn.2016-06.io.spdk:cnode1 00:17:46.542 Namespace ID: 1 size: 1GB 00:17:46.542 fused_ordering(0) 00:17:46.542 fused_ordering(1) 00:17:46.542 fused_ordering(2) 00:17:46.542 fused_ordering(3) 00:17:46.542 fused_ordering(4) 00:17:46.542 fused_ordering(5) 00:17:46.542 fused_ordering(6) 00:17:46.542 fused_ordering(7) 00:17:46.542 fused_ordering(8) 00:17:46.542 fused_ordering(9) 00:17:46.542 fused_ordering(10) 00:17:46.542 fused_ordering(11) 00:17:46.542 fused_ordering(12) 00:17:46.542 fused_ordering(13) 00:17:46.542 fused_ordering(14) 00:17:46.542 fused_ordering(15) 00:17:46.542 fused_ordering(16) 00:17:46.542 fused_ordering(17) 00:17:46.542 fused_ordering(18) 00:17:46.542 fused_ordering(19) 00:17:46.542 fused_ordering(20) 00:17:46.542 fused_ordering(21) 00:17:46.542 fused_ordering(22) 00:17:46.542 fused_ordering(23) 00:17:46.542 fused_ordering(24) 00:17:46.542 fused_ordering(25) 00:17:46.542 fused_ordering(26) 00:17:46.542 fused_ordering(27) 00:17:46.542 fused_ordering(28) 00:17:46.542 fused_ordering(29) 00:17:46.542 fused_ordering(30) 00:17:46.542 fused_ordering(31) 00:17:46.542 fused_ordering(32) 00:17:46.542 fused_ordering(33) 00:17:46.542 fused_ordering(34) 00:17:46.542 fused_ordering(35) 00:17:46.542 fused_ordering(36) 00:17:46.542 fused_ordering(37) 00:17:46.542 fused_ordering(38) 00:17:46.542 fused_ordering(39) 00:17:46.542 fused_ordering(40) 00:17:46.542 fused_ordering(41) 00:17:46.542 fused_ordering(42) 00:17:46.542 fused_ordering(43) 00:17:46.542 fused_ordering(44) 00:17:46.542 fused_ordering(45) 00:17:46.542 fused_ordering(46) 00:17:46.542 fused_ordering(47) 00:17:46.542 fused_ordering(48) 00:17:46.542 fused_ordering(49) 00:17:46.542 fused_ordering(50) 00:17:46.542 fused_ordering(51) 00:17:46.542 fused_ordering(52) 00:17:46.542 fused_ordering(53) 00:17:46.542 fused_ordering(54) 00:17:46.542 fused_ordering(55) 00:17:46.542 fused_ordering(56) 00:17:46.542 fused_ordering(57) 00:17:46.542 fused_ordering(58) 00:17:46.542 fused_ordering(59) 00:17:46.542 fused_ordering(60) 00:17:46.542 fused_ordering(61) 00:17:46.542 fused_ordering(62) 00:17:46.542 fused_ordering(63) 00:17:46.542 fused_ordering(64) 00:17:46.542 fused_ordering(65) 00:17:46.542 fused_ordering(66) 00:17:46.542 fused_ordering(67) 00:17:46.542 fused_ordering(68) 00:17:46.542 fused_ordering(69) 00:17:46.542 fused_ordering(70) 00:17:46.542 fused_ordering(71) 00:17:46.542 fused_ordering(72) 00:17:46.542 fused_ordering(73) 00:17:46.542 fused_ordering(74) 00:17:46.542 fused_ordering(75) 00:17:46.542 fused_ordering(76) 00:17:46.542 fused_ordering(77) 00:17:46.542 fused_ordering(78) 00:17:46.542 fused_ordering(79) 
00:17:46.542 fused_ordering(80) 00:17:46.542 fused_ordering(81) 00:17:46.542 fused_ordering(82) 00:17:46.542 fused_ordering(83) 00:17:46.542 fused_ordering(84) 00:17:46.542 fused_ordering(85) 00:17:46.542 fused_ordering(86) 00:17:46.542 fused_ordering(87) 00:17:46.542 fused_ordering(88) 00:17:46.542 fused_ordering(89) 00:17:46.542 fused_ordering(90) 00:17:46.542 fused_ordering(91) 00:17:46.542 fused_ordering(92) 00:17:46.542 fused_ordering(93) 00:17:46.542 fused_ordering(94) 00:17:46.542 fused_ordering(95) 00:17:46.542 fused_ordering(96) 00:17:46.542 fused_ordering(97) 00:17:46.542 fused_ordering(98) 00:17:46.542 fused_ordering(99) 00:17:46.542 fused_ordering(100) 00:17:46.542 fused_ordering(101) 00:17:46.542 fused_ordering(102) 00:17:46.542 fused_ordering(103) 00:17:46.542 fused_ordering(104) 00:17:46.542 fused_ordering(105) 00:17:46.542 fused_ordering(106) 00:17:46.542 fused_ordering(107) 00:17:46.542 fused_ordering(108) 00:17:46.542 fused_ordering(109) 00:17:46.542 fused_ordering(110) 00:17:46.542 fused_ordering(111) 00:17:46.542 fused_ordering(112) 00:17:46.542 fused_ordering(113) 00:17:46.542 fused_ordering(114) 00:17:46.542 fused_ordering(115) 00:17:46.542 fused_ordering(116) 00:17:46.542 fused_ordering(117) 00:17:46.542 fused_ordering(118) 00:17:46.542 fused_ordering(119) 00:17:46.543 fused_ordering(120) 00:17:46.543 fused_ordering(121) 00:17:46.543 fused_ordering(122) 00:17:46.543 fused_ordering(123) 00:17:46.543 fused_ordering(124) 00:17:46.543 fused_ordering(125) 00:17:46.543 fused_ordering(126) 00:17:46.543 fused_ordering(127) 00:17:46.543 fused_ordering(128) 00:17:46.543 fused_ordering(129) 00:17:46.543 fused_ordering(130) 00:17:46.543 fused_ordering(131) 00:17:46.543 fused_ordering(132) 00:17:46.543 fused_ordering(133) 00:17:46.543 fused_ordering(134) 00:17:46.543 fused_ordering(135) 00:17:46.543 fused_ordering(136) 00:17:46.543 fused_ordering(137) 00:17:46.543 fused_ordering(138) 00:17:46.543 fused_ordering(139) 00:17:46.543 fused_ordering(140) 00:17:46.543 fused_ordering(141) 00:17:46.543 fused_ordering(142) 00:17:46.543 fused_ordering(143) 00:17:46.543 fused_ordering(144) 00:17:46.543 fused_ordering(145) 00:17:46.543 fused_ordering(146) 00:17:46.543 fused_ordering(147) 00:17:46.543 fused_ordering(148) 00:17:46.543 fused_ordering(149) 00:17:46.543 fused_ordering(150) 00:17:46.543 fused_ordering(151) 00:17:46.543 fused_ordering(152) 00:17:46.543 fused_ordering(153) 00:17:46.543 fused_ordering(154) 00:17:46.543 fused_ordering(155) 00:17:46.543 fused_ordering(156) 00:17:46.543 fused_ordering(157) 00:17:46.543 fused_ordering(158) 00:17:46.543 fused_ordering(159) 00:17:46.543 fused_ordering(160) 00:17:46.543 fused_ordering(161) 00:17:46.543 fused_ordering(162) 00:17:46.543 fused_ordering(163) 00:17:46.543 fused_ordering(164) 00:17:46.543 fused_ordering(165) 00:17:46.543 fused_ordering(166) 00:17:46.543 fused_ordering(167) 00:17:46.543 fused_ordering(168) 00:17:46.543 fused_ordering(169) 00:17:46.543 fused_ordering(170) 00:17:46.543 fused_ordering(171) 00:17:46.543 fused_ordering(172) 00:17:46.543 fused_ordering(173) 00:17:46.543 fused_ordering(174) 00:17:46.543 fused_ordering(175) 00:17:46.543 fused_ordering(176) 00:17:46.543 fused_ordering(177) 00:17:46.543 fused_ordering(178) 00:17:46.543 fused_ordering(179) 00:17:46.543 fused_ordering(180) 00:17:46.543 fused_ordering(181) 00:17:46.543 fused_ordering(182) 00:17:46.543 fused_ordering(183) 00:17:46.543 fused_ordering(184) 00:17:46.543 fused_ordering(185) 00:17:46.543 fused_ordering(186) 00:17:46.543 fused_ordering(187) 
00:17:46.543 fused_ordering(188) 00:17:46.543 fused_ordering(189) 00:17:46.543 fused_ordering(190) 00:17:46.543 fused_ordering(191) 00:17:46.543 fused_ordering(192) 00:17:46.543 fused_ordering(193) 00:17:46.543 fused_ordering(194) 00:17:46.543 fused_ordering(195) 00:17:46.543 fused_ordering(196) 00:17:46.543 fused_ordering(197) 00:17:46.543 fused_ordering(198) 00:17:46.543 fused_ordering(199) 00:17:46.543 fused_ordering(200) 00:17:46.543 fused_ordering(201) 00:17:46.543 fused_ordering(202) 00:17:46.543 fused_ordering(203) 00:17:46.543 fused_ordering(204) 00:17:46.543 fused_ordering(205) 00:17:46.802 fused_ordering(206) 00:17:46.802 fused_ordering(207) 00:17:46.802 fused_ordering(208) 00:17:46.802 fused_ordering(209) 00:17:46.802 fused_ordering(210) 00:17:46.802 fused_ordering(211) 00:17:46.802 fused_ordering(212) 00:17:46.802 fused_ordering(213) 00:17:46.802 fused_ordering(214) 00:17:46.802 fused_ordering(215) 00:17:46.802 fused_ordering(216) 00:17:46.802 fused_ordering(217) 00:17:46.802 fused_ordering(218) 00:17:46.802 fused_ordering(219) 00:17:46.802 fused_ordering(220) 00:17:46.802 fused_ordering(221) 00:17:46.802 fused_ordering(222) 00:17:46.802 fused_ordering(223) 00:17:46.802 fused_ordering(224) 00:17:46.802 fused_ordering(225) 00:17:46.802 fused_ordering(226) 00:17:46.802 fused_ordering(227) 00:17:46.802 fused_ordering(228) 00:17:46.802 fused_ordering(229) 00:17:46.802 fused_ordering(230) 00:17:46.802 fused_ordering(231) 00:17:46.802 fused_ordering(232) 00:17:46.802 fused_ordering(233) 00:17:46.802 fused_ordering(234) 00:17:46.802 fused_ordering(235) 00:17:46.802 fused_ordering(236) 00:17:46.802 fused_ordering(237) 00:17:46.802 fused_ordering(238) 00:17:46.802 fused_ordering(239) 00:17:46.802 fused_ordering(240) 00:17:46.802 fused_ordering(241) 00:17:46.802 fused_ordering(242) 00:17:46.802 fused_ordering(243) 00:17:46.802 fused_ordering(244) 00:17:46.802 fused_ordering(245) 00:17:46.802 fused_ordering(246) 00:17:46.802 fused_ordering(247) 00:17:46.802 fused_ordering(248) 00:17:46.802 fused_ordering(249) 00:17:46.802 fused_ordering(250) 00:17:46.802 fused_ordering(251) 00:17:46.802 fused_ordering(252) 00:17:46.802 fused_ordering(253) 00:17:46.802 fused_ordering(254) 00:17:46.802 fused_ordering(255) 00:17:46.802 fused_ordering(256) 00:17:46.802 fused_ordering(257) 00:17:46.802 fused_ordering(258) 00:17:46.802 fused_ordering(259) 00:17:46.802 fused_ordering(260) 00:17:46.802 fused_ordering(261) 00:17:46.802 fused_ordering(262) 00:17:46.802 fused_ordering(263) 00:17:46.802 fused_ordering(264) 00:17:46.802 fused_ordering(265) 00:17:46.802 fused_ordering(266) 00:17:46.802 fused_ordering(267) 00:17:46.802 fused_ordering(268) 00:17:46.802 fused_ordering(269) 00:17:46.802 fused_ordering(270) 00:17:46.802 fused_ordering(271) 00:17:46.802 fused_ordering(272) 00:17:46.802 fused_ordering(273) 00:17:46.802 fused_ordering(274) 00:17:46.802 fused_ordering(275) 00:17:46.802 fused_ordering(276) 00:17:46.802 fused_ordering(277) 00:17:46.802 fused_ordering(278) 00:17:46.802 fused_ordering(279) 00:17:46.802 fused_ordering(280) 00:17:46.802 fused_ordering(281) 00:17:46.802 fused_ordering(282) 00:17:46.802 fused_ordering(283) 00:17:46.802 fused_ordering(284) 00:17:46.802 fused_ordering(285) 00:17:46.802 fused_ordering(286) 00:17:46.802 fused_ordering(287) 00:17:46.802 fused_ordering(288) 00:17:46.802 fused_ordering(289) 00:17:46.802 fused_ordering(290) 00:17:46.802 fused_ordering(291) 00:17:46.802 fused_ordering(292) 00:17:46.802 fused_ordering(293) 00:17:46.802 fused_ordering(294) 00:17:46.802 
fused_ordering(295) 00:17:46.802 fused_ordering(296) 00:17:46.802 fused_ordering(297) 00:17:46.802 fused_ordering(298) 00:17:46.802 fused_ordering(299) 00:17:46.802 fused_ordering(300) 00:17:46.802 fused_ordering(301) 00:17:46.802 fused_ordering(302) 00:17:46.802 fused_ordering(303) 00:17:46.802 fused_ordering(304) 00:17:46.802 fused_ordering(305) 00:17:46.802 fused_ordering(306) 00:17:46.802 fused_ordering(307) 00:17:46.802 fused_ordering(308) 00:17:46.802 fused_ordering(309) 00:17:46.802 fused_ordering(310) 00:17:46.802 fused_ordering(311) 00:17:46.802 fused_ordering(312) 00:17:46.802 fused_ordering(313) 00:17:46.802 fused_ordering(314) 00:17:46.802 fused_ordering(315) 00:17:46.802 fused_ordering(316) 00:17:46.802 fused_ordering(317) 00:17:46.802 fused_ordering(318) 00:17:46.802 fused_ordering(319) 00:17:46.802 fused_ordering(320) 00:17:46.802 fused_ordering(321) 00:17:46.802 fused_ordering(322) 00:17:46.802 fused_ordering(323) 00:17:46.802 fused_ordering(324) 00:17:46.802 fused_ordering(325) 00:17:46.802 fused_ordering(326) 00:17:46.802 fused_ordering(327) 00:17:46.802 fused_ordering(328) 00:17:46.802 fused_ordering(329) 00:17:46.802 fused_ordering(330) 00:17:46.802 fused_ordering(331) 00:17:46.802 fused_ordering(332) 00:17:46.802 fused_ordering(333) 00:17:46.802 fused_ordering(334) 00:17:46.802 fused_ordering(335) 00:17:46.802 fused_ordering(336) 00:17:46.802 fused_ordering(337) 00:17:46.802 fused_ordering(338) 00:17:46.802 fused_ordering(339) 00:17:46.802 fused_ordering(340) 00:17:46.802 fused_ordering(341) 00:17:46.802 fused_ordering(342) 00:17:46.802 fused_ordering(343) 00:17:46.802 fused_ordering(344) 00:17:46.802 fused_ordering(345) 00:17:46.802 fused_ordering(346) 00:17:46.802 fused_ordering(347) 00:17:46.802 fused_ordering(348) 00:17:46.802 fused_ordering(349) 00:17:46.802 fused_ordering(350) 00:17:46.802 fused_ordering(351) 00:17:46.802 fused_ordering(352) 00:17:46.802 fused_ordering(353) 00:17:46.802 fused_ordering(354) 00:17:46.802 fused_ordering(355) 00:17:46.802 fused_ordering(356) 00:17:46.802 fused_ordering(357) 00:17:46.802 fused_ordering(358) 00:17:46.802 fused_ordering(359) 00:17:46.802 fused_ordering(360) 00:17:46.802 fused_ordering(361) 00:17:46.802 fused_ordering(362) 00:17:46.802 fused_ordering(363) 00:17:46.802 fused_ordering(364) 00:17:46.802 fused_ordering(365) 00:17:46.802 fused_ordering(366) 00:17:46.802 fused_ordering(367) 00:17:46.802 fused_ordering(368) 00:17:46.802 fused_ordering(369) 00:17:46.802 fused_ordering(370) 00:17:46.802 fused_ordering(371) 00:17:46.802 fused_ordering(372) 00:17:46.802 fused_ordering(373) 00:17:46.802 fused_ordering(374) 00:17:46.802 fused_ordering(375) 00:17:46.802 fused_ordering(376) 00:17:46.802 fused_ordering(377) 00:17:46.802 fused_ordering(378) 00:17:46.802 fused_ordering(379) 00:17:46.802 fused_ordering(380) 00:17:46.802 fused_ordering(381) 00:17:46.803 fused_ordering(382) 00:17:46.803 fused_ordering(383) 00:17:46.803 fused_ordering(384) 00:17:46.803 fused_ordering(385) 00:17:46.803 fused_ordering(386) 00:17:46.803 fused_ordering(387) 00:17:46.803 fused_ordering(388) 00:17:46.803 fused_ordering(389) 00:17:46.803 fused_ordering(390) 00:17:46.803 fused_ordering(391) 00:17:46.803 fused_ordering(392) 00:17:46.803 fused_ordering(393) 00:17:46.803 fused_ordering(394) 00:17:46.803 fused_ordering(395) 00:17:46.803 fused_ordering(396) 00:17:46.803 fused_ordering(397) 00:17:46.803 fused_ordering(398) 00:17:46.803 fused_ordering(399) 00:17:46.803 fused_ordering(400) 00:17:46.803 fused_ordering(401) 00:17:46.803 fused_ordering(402) 
00:17:46.803 fused_ordering(403) 00:17:46.803 fused_ordering(404) 00:17:46.803 fused_ordering(405) 00:17:46.803 fused_ordering(406) 00:17:46.803 fused_ordering(407) 00:17:46.803 fused_ordering(408) 00:17:46.803 fused_ordering(409) 00:17:46.803 fused_ordering(410) 00:17:47.370 fused_ordering(411) 00:17:47.370 fused_ordering(412) 00:17:47.370 fused_ordering(413) 00:17:47.370 fused_ordering(414) 00:17:47.370 fused_ordering(415) 00:17:47.370 fused_ordering(416) 00:17:47.371 fused_ordering(417) 00:17:47.371 fused_ordering(418) 00:17:47.371 fused_ordering(419) 00:17:47.371 fused_ordering(420) 00:17:47.371 fused_ordering(421) 00:17:47.371 fused_ordering(422) 00:17:47.371 fused_ordering(423) 00:17:47.371 fused_ordering(424) 00:17:47.371 fused_ordering(425) 00:17:47.371 fused_ordering(426) 00:17:47.371 fused_ordering(427) 00:17:47.371 fused_ordering(428) 00:17:47.371 fused_ordering(429) 00:17:47.371 fused_ordering(430) 00:17:47.371 fused_ordering(431) 00:17:47.371 fused_ordering(432) 00:17:47.371 fused_ordering(433) 00:17:47.371 fused_ordering(434) 00:17:47.371 fused_ordering(435) 00:17:47.371 fused_ordering(436) 00:17:47.371 fused_ordering(437) 00:17:47.371 fused_ordering(438) 00:17:47.371 fused_ordering(439) 00:17:47.371 fused_ordering(440) 00:17:47.371 fused_ordering(441) 00:17:47.371 fused_ordering(442) 00:17:47.371 fused_ordering(443) 00:17:47.371 fused_ordering(444) 00:17:47.371 fused_ordering(445) 00:17:47.371 fused_ordering(446) 00:17:47.371 fused_ordering(447) 00:17:47.371 fused_ordering(448) 00:17:47.371 fused_ordering(449) 00:17:47.371 fused_ordering(450) 00:17:47.371 fused_ordering(451) 00:17:47.371 fused_ordering(452) 00:17:47.371 fused_ordering(453) 00:17:47.371 fused_ordering(454) 00:17:47.371 fused_ordering(455) 00:17:47.371 fused_ordering(456) 00:17:47.371 fused_ordering(457) 00:17:47.371 fused_ordering(458) 00:17:47.371 fused_ordering(459) 00:17:47.371 fused_ordering(460) 00:17:47.371 fused_ordering(461) 00:17:47.371 fused_ordering(462) 00:17:47.371 fused_ordering(463) 00:17:47.371 fused_ordering(464) 00:17:47.371 fused_ordering(465) 00:17:47.371 fused_ordering(466) 00:17:47.371 fused_ordering(467) 00:17:47.371 fused_ordering(468) 00:17:47.371 fused_ordering(469) 00:17:47.371 fused_ordering(470) 00:17:47.371 fused_ordering(471) 00:17:47.371 fused_ordering(472) 00:17:47.371 fused_ordering(473) 00:17:47.371 fused_ordering(474) 00:17:47.371 fused_ordering(475) 00:17:47.371 fused_ordering(476) 00:17:47.371 fused_ordering(477) 00:17:47.371 fused_ordering(478) 00:17:47.371 fused_ordering(479) 00:17:47.371 fused_ordering(480) 00:17:47.371 fused_ordering(481) 00:17:47.371 fused_ordering(482) 00:17:47.371 fused_ordering(483) 00:17:47.371 fused_ordering(484) 00:17:47.371 fused_ordering(485) 00:17:47.371 fused_ordering(486) 00:17:47.371 fused_ordering(487) 00:17:47.371 fused_ordering(488) 00:17:47.371 fused_ordering(489) 00:17:47.371 fused_ordering(490) 00:17:47.371 fused_ordering(491) 00:17:47.371 fused_ordering(492) 00:17:47.371 fused_ordering(493) 00:17:47.371 fused_ordering(494) 00:17:47.371 fused_ordering(495) 00:17:47.371 fused_ordering(496) 00:17:47.371 fused_ordering(497) 00:17:47.371 fused_ordering(498) 00:17:47.371 fused_ordering(499) 00:17:47.371 fused_ordering(500) 00:17:47.371 fused_ordering(501) 00:17:47.371 fused_ordering(502) 00:17:47.371 fused_ordering(503) 00:17:47.371 fused_ordering(504) 00:17:47.371 fused_ordering(505) 00:17:47.371 fused_ordering(506) 00:17:47.371 fused_ordering(507) 00:17:47.371 fused_ordering(508) 00:17:47.371 fused_ordering(509) 00:17:47.371 
fused_ordering(510) 00:17:47.371 [... fused_ordering(511) through fused_ordering(1022) elided: one per-request fused_ordering progress entry per request, logged between 00:17:47.371 and 00:17:48.878 ...] fused_ordering(1023) 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:48.878 rmmod nvme_tcp 00:17:48.878 rmmod nvme_fabrics 00:17:48.878 rmmod nvme_keyring 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1362844 ']' 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1362844 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 1362844 ']' 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 1362844 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1362844 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1362844' 00:17:48.878 killing process with pid 1362844 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 1362844 00:17:48.878 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 1362844 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.138 13:45:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.046 13:45:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:51.305 00:17:51.305 real 0m13.718s 00:17:51.305 user 0m7.438s 00:17:51.305 sys 0m7.789s 00:17:51.305 13:45:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:51.305 13:45:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 ************************************ 00:17:51.305 END TEST nvmf_fused_ordering 00:17:51.305 ************************************ 00:17:51.305 13:45:44 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:51.305 13:45:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:51.305 13:45:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:51.305 13:45:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 
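For reference, the nvmftestfini teardown traced above boils down to roughly the following shell sequence (a minimal sketch reconstructed from this run's trace; the PID, interface name, and the assumption that _remove_spdk_ns simply deletes the test namespace are specific to this job):

  # stop the nvmf target app started for the test, then flush buffers
  kill 1362844 && wait 1362844
  sync
  # unload the kernel NVMe/TCP initiator stack (the harness retries up to 20 times)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop the target-side network namespace (assumed behavior of _remove_spdk_ns)
  ip netns delete cvl_0_0_ns_spdk
  # clear the initiator-side address
  ip -4 addr flush cvl_0_1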
************************************ 00:17:51.305 START TEST nvmf_delete_subsystem 00:17:51.305 ************************************ 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:51.305 * Looking for test storage... 00:17:51.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[these three toolchain dirs repeated four more times by nested sourcing]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same directory set with the go dir prepended] 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same directory set with the protoc dir prepended] 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [the exported PATH as above] 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 13:45:44
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.305 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.306 13:45:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:59.432 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:59.432 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.432 
13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:59.432 Found net devices under 0000:af:00.0: cvl_0_0 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:59.432 Found net devices under 0000:af:00.1: cvl_0_1 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.432 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.433 13:45:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.433 13:45:51 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:17:59.433 00:17:59.433 --- 10.0.0.2 ping statistics --- 00:17:59.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.433 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:17:59.433 00:17:59.433 --- 10.0.0.1 ping statistics --- 00:17:59.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.433 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1367312 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1367312 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 1367312 ']' 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.433 13:45:51 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:59.433 13:45:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 [2024-06-11 13:45:51.291280] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:17:59.433 [2024-06-11 13:45:51.291338] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.433 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.433 [2024-06-11 13:45:51.398467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.433 [2024-06-11 13:45:51.481276] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.433 [2024-06-11 13:45:51.481325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.433 [2024-06-11 13:45:51.481340] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.433 [2024-06-11 13:45:51.481353] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.433 [2024-06-11 13:45:51.481365] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
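The nvmf_tgt launched above runs inside the cvl_0_0_ns_spdk namespace whose setup was traced a few entries earlier; condensed, that topology setup is roughly the following (a sketch assembled from this trace; cvl_0_0 and cvl_0_1 are the renamed e810 ports on this host):

  # give the target port its own namespace; the initiator stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP traffic on the initiator port and check reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1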
00:17:59.433 [2024-06-11 13:45:51.481425] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.433 [2024-06-11 13:45:51.481430] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 [2024-06-11 13:45:52.242210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 [2024-06-11 13:45:52.262466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 NULL1 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 Delay0 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.433 13:45:52 
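Together with the namespace attach and perf run that follow below, the rpc_cmd calls above provision the target end to end; the equivalent sequence driven directly with SPDK's scripts/rpc.py would look roughly like this (a sketch; it assumes the target's RPC socket sits at the default /var/tmp/spdk.sock, with flags copied from this trace):

  RPC=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512          # 1000 MB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency (values in us)
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # delete_subsystem.sh then drives I/O at Delay0 with spdk_nvme_perf and, while that
  # I/O is still queued in the delay bdev, removes the subsystem:
  # $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1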
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1367370 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:59.433 13:45:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:59.433 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.692 [2024-06-11 13:45:52.343409] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:01.598 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.598 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.598 13:45:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 starting I/O failed: -6 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 starting I/O failed: -6 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 starting I/O failed: -6 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 starting I/O failed: -6 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 Write completed with error (sct=0, sc=8) 00:18:01.598 Read completed with error (sct=0, sc=8) 00:18:01.598 starting I/O failed: -6 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Write completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Write completed with error (sct=0, sc=8) 00:18:01.599 starting I/O failed: -6 00:18:01.599 Write completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 starting I/O failed: -6 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 Read completed with error (sct=0, sc=8) 00:18:01.599 starting I/O failed: -6 00:18:01.599 
[00:18:01.599-00:18:02.538: a few hundred further 'Read/Write completed with error (sct=0, sc=8)' completions (sct=0/sc=8: command aborted due to SQ deletion) interleaved with repeated 'starting I/O failed: -6' entries elided; outstanding perf I/O is failed back while nqn.2016-06.io.spdk:cnode1 is deleted under load. The distinct nvme_tcp state-machine errors from this window are kept below.]
00:18:01.599 [2024-06-11 13:45:54.432928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59b250 is same with the state(5) to be set
00:18:01.599 [2024-06-11 13:45:54.433604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f63c000d450 is same with the state(5) to be set
00:18:02.537 [2024-06-11 13:45:55.398825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bc1a0 is same with the state(5) to be set
00:18:02.537 [2024-06-11 13:45:55.431967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59b070 is same with the state(5) to be set
00:18:02.538 [2024-06-11 13:45:55.435493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f63c000d760 is same with the state(5) to be set
00:18:02.538 [2024-06-11 13:45:55.436301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f63c000cfe0 is same with the state(5) to be set
00:18:02.538 [2024-06-11 13:45:55.436430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bcc30 is same with the state(5) to be set
00:18:02.538 Initializing NVMe Controllers 00:18:02.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.538 Controller IO queue size 128, less than required. 00:18:02.538 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:02.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:02.538 Initialization complete. Launching workers.
00:18:02.538 ========================================================
00:18:02.538                                                                            Latency(us)
00:18:02.538 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:18:02.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     172.81       0.08  889905.82     480.49 1011182.53
00:18:02.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     162.87       0.08  910725.73     339.67 1011840.61
00:18:02.538 ========================================================
00:18:02.538 Total                                                                    :     335.68       0.16  900007.78     339.67 1011840.61
00:18:02.538
00:18:02.538 [2024-06-11 13:45:55.437144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bc1a0 (9): Bad file descriptor
00:18:02.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:18:02.538 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.538 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:18:02.538 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1367370 00:18:02.538 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1367370 00:18:03.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1367370) - No such process 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1367370 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1367370 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 1367370 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
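A note on the xtrace above: the delay=0 / kill -0 / sleep 0.5 steps (delete_subsystem.sh lines 34-38) are a bounded poll that waits for the old perf process to disappear once its subsystem is gone. A minimal bash sketch of that pattern, reconstructed from the trace — the 30-iteration bound and the 0.5 s interval are taken from the trace, while the $perf_pid variable reference and the stderr redirect are assumptions made to keep the example self-contained:

    delay=0
    # kill -0 sends no signal; it only reports whether the PID still exists.
    while kill -0 "$perf_pid" 2> /dev/null; do
        if (( delay++ > 30 )); then        # give up after ~15 s of 0.5 s polls
            echo "process $perf_pid did not exit in time" >&2
            break
        fi
        sleep 0.5
    done

The "kill: (1367370) - No such process" message in the trace above is exactly this loop's exit condition firing once the PID is gone.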
00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:03.107 [2024-06-11 13:45:55.965796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1368147 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:03.107 13:45:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:03.107 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.366 [2024-06-11 13:45:56.036560] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
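A note on the setup just traced: condensed, delete_subsystem.sh rebuilds the target and immediately puts queued I/O against it. A sketch of the equivalent shell sequence, under stated assumptions — rpc.py stands in for the full scripts/rpc.py path shown in the trace, and capturing the PID with $! is an assumption about how perf_pid gets its value:

    # Recreate the subsystem, expose it on TCP port 4420, attach the Delay0 bdev
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 3 s of 512-byte random I/O, 70% reads, queue depth 128, on cores 2-3 (0xC)
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The Delay0 bdev plus the deep queue keep requests outstanding, so a subsequent subsystem delete lands while I/O is still in flight — the pattern that produced the (sct=0, sc=8) error completions seen earlier.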
00:18:03.624 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:03.624 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:03.624 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:04.192 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:04.192 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:04.192 13:45:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:04.760 13:45:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:04.761 13:45:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:04.761 13:45:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:05.328 13:45:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:05.329 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:05.329 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:05.897 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:05.897 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:05.897 13:45:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:06.156 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:06.156 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:06.156 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:06.415 Initializing NVMe Controllers 00:18:06.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.415 Controller IO queue size 128, less than required. 00:18:06.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:06.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:06.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:06.415 Initialization complete. Launching workers. 
00:18:06.415 ========================================================
00:18:06.415                                                                            Latency(us)
00:18:06.415 Device Information                                                       :       IOPS      MiB/s    Average          min          max
00:18:06.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002648.21   1000209.39   1011867.28
00:18:06.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004863.42   1000373.42   1041024.06
00:18:06.415 ========================================================
00:18:06.415 Total                                                                    :     256.00       0.12 1003755.82   1000209.39   1041024.06
00:18:06.415
00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1368147 00:18:06.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1368147) - No such process 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1368147 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1367312 ']' 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1367312 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 1367312 ']' 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 1367312 00:18:06.677 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:18:06.939 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:06.939 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1367312 00:18:06.939 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:06.939 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:06.939 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1367312'
killing process with pid 1367312
00:18:06.939 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 1367312 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait
1367312 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.940 13:45:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.476 13:46:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.477 00:18:09.477 real 0m17.880s 00:18:09.477 user 0m30.036s 00:18:09.477 sys 0m7.182s 00:18:09.477 13:46:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:09.477 13:46:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:09.477 ************************************ 00:18:09.477 END TEST nvmf_delete_subsystem 00:18:09.477 ************************************ 00:18:09.477 13:46:01 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:09.477 13:46:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:09.477 13:46:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:09.477 13:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.477 ************************************ 00:18:09.477 START TEST nvmf_ns_masking 00:18:09.477 ************************************ 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:09.477 * Looking for test storage... 
00:18:09.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=c1940cc3-baf3-4316-92f9-39567a7e97e5 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.477 13:46:02 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.477 13:46:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:16.109 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:16.109 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:16.109 Found net devices under 0000:af:00.0: cvl_0_0 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:16.109 Found net devices under 0000:af:00.1: cvl_0_1 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:18:16.109 00:18:16.109 --- 10.0.0.2 ping statistics --- 00:18:16.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.109 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:18:16.109 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:18:16.110 00:18:16.110 --- 10.0.0.1 ping statistics --- 00:18:16.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.110 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1372394 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1372394 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 1372394 ']' 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:16.110 13:46:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:16.110 [2024-06-11 13:46:09.011399] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:18:16.110 [2024-06-11 13:46:09.011458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.369 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.369 [2024-06-11 13:46:09.120695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.369 [2024-06-11 13:46:09.207247] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.369 [2024-06-11 13:46:09.207293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.369 [2024-06-11 13:46:09.207306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.369 [2024-06-11 13:46:09.207318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.369 [2024-06-11 13:46:09.207328] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.369 [2024-06-11 13:46:09.207401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.369 [2024-06-11 13:46:09.208493] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.369 [2024-06-11 13:46:09.208551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.369 [2024-06-11 13:46:09.208551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.306 13:46:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:17.306 [2024-06-11 13:46:10.178825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.306 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:18:17.306 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:18:17.306 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:17.564 Malloc1 00:18:17.564 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:17.822 Malloc2 00:18:17.822 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:18.080 13:46:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:18.338 13:46:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.338 [2024-06-11 13:46:11.156846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.338 13:46:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:18:18.338 13:46:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c1940cc3-baf3-4316-92f9-39567a7e97e5 -a 10.0.0.2 -s 4420 -i 4 00:18:18.596 13:46:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.596 13:46:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:18.596 13:46:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.596 13:46:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:18.596 13:46:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:20.492 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:20.750 [ 0]:0x1 00:18:20.750 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:20.750 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:20.750 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b5f375ae6a9c44368dca06ca4392169e 00:18:20.750 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b5f375ae6a9c44368dca06ca4392169e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:20.750 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:18:21.008 [ 0]:0x1 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b5f375ae6a9c44368dca06ca4392169e 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b5f375ae6a9c44368dca06ca4392169e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:21.008 [ 1]:0x2 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.008 13:46:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.266 13:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:21.524 13:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:18:21.524 13:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c1940cc3-baf3-4316-92f9-39567a7e97e5 -a 10.0.0.2 -s 4420 -i 4 00:18:21.782 13:46:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:21.782 13:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:21.782 13:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.782 13:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:18:21.782 13:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:18:21.782 13:46:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.679 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:23.937 [ 0]:0x2 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.937 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:24.195 [ 0]:0x1 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b5f375ae6a9c44368dca06ca4392169e 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b5f375ae6a9c44368dca06ca4392169e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:24.195 [ 1]:0x2 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.195 13:46:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:24.453 
13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:24.453 [ 0]:0x2 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:18:24.453 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.711 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:24.711 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:18:24.711 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c1940cc3-baf3-4316-92f9-39567a7e97e5 -a 10.0.0.2 -s 4420 -i 4 00:18:24.969 13:46:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:24.969 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:24.969 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.969 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:18:24.969 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:18:24.969 13:46:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:27.496 [ 0]:0x1 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b5f375ae6a9c44368dca06ca4392169e 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b5f375ae6a9c44368dca06ca4392169e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:27.496 [ 1]:0x2 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.496 13:46:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:27.496 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:27.496 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.496 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:27.496 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:18:27.496 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:27.496 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:27.497 [ 0]:0x2 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:27.497 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:27.755 [2024-06-11 13:46:20.541511] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:27.755 request: 00:18:27.755 { 00:18:27.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.755 "nsid": 2, 00:18:27.755 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.755 "method": 
"nvmf_ns_remove_host", 00:18:27.755 "req_id": 1 00:18:27.755 } 00:18:27.755 Got JSON-RPC error response 00:18:27.755 response: 00:18:27.755 { 00:18:27.755 "code": -32602, 00:18:27.755 "message": "Invalid parameters" 00:18:27.755 } 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:27.755 [ 0]:0x2 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:27.755 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:28.012 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22a12b1bea8448f2ac34804893a79ea2 00:18:28.012 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22a12b1bea8448f2ac34804893a79ea2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:28.012 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:18:28.012 13:46:20 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.012 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.271 13:46:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.271 rmmod nvme_tcp 00:18:28.271 rmmod nvme_fabrics 00:18:28.271 rmmod nvme_keyring 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1372394 ']' 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1372394 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 1372394 ']' 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 1372394 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1372394 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1372394' 00:18:28.271 killing process with pid 1372394 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 1372394 00:18:28.271 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 1372394 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.531 13:46:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.067 
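Behind the xtrace output, the whole masking exercise is driven by two target-side JSON-RPC calls (rpc.py is SPDK's scripts/rpc.py, talking to the running nvmf_tgt; subsystem and host NQNs as used in this run):

# Hide namespace 1 from host1; ns_is_visible 0x1 on the host then fails.
scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# Expose it again; the namespace reappears with its real NGUID.
scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The negative case at ns_masking.sh@105 issues nvmf_ns_remove_host against namespace 2 and is expected to fail: the -32602 Invalid parameters response shown above is what the NOT wrapper converts into a pass.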
13:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.067 00:18:31.067 real 0m21.414s 00:18:31.067 user 0m52.368s 00:18:31.067 sys 0m8.025s 00:18:31.067 13:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:31.067 13:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.067 ************************************ 00:18:31.067 END TEST nvmf_ns_masking 00:18:31.067 ************************************ 00:18:31.067 13:46:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:18:31.067 13:46:23 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:31.067 13:46:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:31.067 13:46:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:31.067 13:46:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.067 ************************************ 00:18:31.067 START TEST nvmf_nvme_cli 00:18:31.067 ************************************ 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:31.067 * Looking for test storage... 00:18:31.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:31.067 13:46:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.068 13:46:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:37.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:37.639 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.639 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.640 13:46:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:37.640 Found net devices under 0000:af:00.0: cvl_0_0 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:37.640 Found net devices under 0000:af:00.1: cvl_0_1 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.640 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:18:37.900 00:18:37.900 --- 10.0.0.2 ping statistics --- 00:18:37.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.900 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:18:37.900 00:18:37.900 --- 10.0.0.1 ping statistics --- 00:18:37.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.900 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1378339 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1378339 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 1378339 ']' 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
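The nvmftestinit plumbing traced above is worth spelling out once. With NET_TYPE=phy and two E810 ports on one box, the target port is isolated in its own network namespace so initiator and target can exchange real TCP traffic locally; condensed from the traced commands (nvmf/common.sh@244-268):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # drop stale addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The target application itself (here nvmfpid=1378339, started with -i 0 -e 0xFFFF -m 0xF) then runs inside cvl_0_0_ns_spdk, which is why its command line is prefixed with ip netns exec.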
00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:37.900 13:46:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 [2024-06-11 13:46:30.817279] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:18:38.161 [2024-06-11 13:46:30.817329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.161 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.161 [2024-06-11 13:46:30.912521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.161 [2024-06-11 13:46:30.995245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.161 [2024-06-11 13:46:30.995291] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.161 [2024-06-11 13:46:30.995304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.161 [2024-06-11 13:46:30.995316] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.161 [2024-06-11 13:46:30.995326] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.161 [2024-06-11 13:46:30.995434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.161 [2024-06-11 13:46:30.995536] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.161 [2024-06-11 13:46:30.995593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.161 [2024-06-11 13:46:30.995594] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.161 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 [2024-06-11 13:46:31.782864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 Malloc0 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 Malloc1 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 [2024-06-11 13:46:31.868998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.162 13:46:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:18:39.162 00:18:39.162 Discovery Log Number of Records 2, Generation counter 2 00:18:39.162 =====Discovery Log Entry 0====== 00:18:39.162 trtype: tcp 00:18:39.162 adrfam: ipv4 00:18:39.162 subtype: current discovery subsystem 00:18:39.162 treq: not required 00:18:39.162 portid: 0 00:18:39.162 trsvcid: 4420 00:18:39.162 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:39.162 traddr: 10.0.0.2 00:18:39.162 eflags: explicit discovery connections, duplicate discovery information 00:18:39.162 sectype: none 00:18:39.162 =====Discovery Log Entry 1====== 00:18:39.162 trtype: tcp 00:18:39.162 adrfam: ipv4 00:18:39.162 subtype: nvme subsystem 00:18:39.162 treq: not required 00:18:39.162 portid: 0 00:18:39.162 trsvcid: 4420 
00:18:39.162 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:39.162 traddr: 10.0.0.2 00:18:39.162 eflags: none 00:18:39.162 sectype: none 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:39.162 13:46:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:40.541 13:46:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:40.541 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:18:40.541 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.541 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:18:40.541 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:18:40.541 13:46:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.441 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:42.699 13:46:35 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:42.699 /dev/nvme0n1 ]] 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.699 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:42.957 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:43.216 13:46:35 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.216 13:46:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.216 rmmod nvme_tcp 00:18:43.216 rmmod nvme_fabrics 00:18:43.216 rmmod nvme_keyring 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1378339 ']' 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1378339 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 1378339 ']' 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 1378339 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1378339 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1378339' 00:18:43.216 killing process with pid 1378339 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 1378339 00:18:43.216 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 1378339 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.475 13:46:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.012 13:46:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.012 00:18:46.012 real 0m14.888s 00:18:46.012 user 0m22.923s 00:18:46.012 sys 0m6.427s 00:18:46.012 13:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:46.012 13:46:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:46.012 ************************************ 00:18:46.012 END TEST nvmf_nvme_cli 00:18:46.012 ************************************ 00:18:46.012 13:46:38 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:18:46.012 13:46:38 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:46.012 13:46:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:46.012 13:46:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:46.012 13:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.012 ************************************ 00:18:46.012 START TEST nvmf_host_management 00:18:46.012 ************************************ 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:46.012 * Looking for test storage... 00:18:46.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...same toolchain prefixes repeated, elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same prefixes, re-prepended; elided...]:/var/lib/snapd/snap/bin
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same prefixes, re-prepended; elided...]:/var/lib/snapd/snap/bin
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo [...the exported PATH, identical to the @4 value above; elided...]
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.012
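host_management opens the same way as the two tests before it: source nvmf/common.sh, then nvmftestinit, whose gather_supported_nvmf_pci_devs pass (traced next) buckets NICs by PCI device ID before picking the TCP interfaces. In outline, with the IDs as they appear in the trace (a sketch of the classification, not the full function):

e810=(0x1592 0x159b)                          # Intel E810 variants
x722=(0x37d2)                                 # Intel X722
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)    # Mellanox
pci_devs=("${e810[@]}")                       # e810 selected (nvmf/common.sh@329-330)

On this host the e810 list matches both 0000:af:00.x ports (device 0x159b, driver ice), which surface as the cvl_0_0 and cvl_0_1 net devices used above.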
13:46:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.012 13:46:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:52.581 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:52.581 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:52.581 Found net devices under 0000:af:00.0: cvl_0_0 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:52.581 Found net devices under 0000:af:00.1: cvl_0_1 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.581 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.582 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:18:52.841 00:18:52.841 --- 10.0.0.2 ping statistics --- 00:18:52.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.841 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:18:52.841 00:18:52.841 --- 10.0.0.1 ping statistics --- 00:18:52.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.841 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1382906 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1382906 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:52.841 
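nvmf_tcp_init above splits the two E810 ports into a target side and an initiator side by moving one of them into a network namespace, so the NVMe/TCP traffic crosses a real link even though both ends run on one host. Condensed to its essentials, with the interface names and addresses from this run, the topology setup is:

    # Target interface lives in its own namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator -> target smoke test

The pair of pings in the log run in both directions, the cheapest check that the namespace plumbing works before the target is started inside it.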
13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1382906 ']' 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:52.841 13:46:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:53.100 [2024-06-11 13:46:45.777026] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:18:53.101 [2024-06-11 13:46:45.777087] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.101 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.101 [2024-06-11 13:46:45.877023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.101 [2024-06-11 13:46:45.960851] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.101 [2024-06-11 13:46:45.960891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.101 [2024-06-11 13:46:45.960904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.101 [2024-06-11 13:46:45.960916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.101 [2024-06-11 13:46:45.960926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
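The "EAL: No free 2048 kB hugepages reported on node 1" line is a DPDK notice, not a failure: it only says NUMA node 1 has no 2 MiB hugepages reserved, and this run proceeds normally (the reactors start below). If EAL init ever does fail for lack of hugepages, they can be reserved at runtime; a sketch (the count of 1024 pages is an arbitrary example, not taken from this job):

    # Reserve 1024 x 2 MiB hugepages on NUMA node 0, then verify the pool.
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep HugePages_ /proc/meminfo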
00:18:53.101 [2024-06-11 13:46:45.961029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.101 [2024-06-11 13:46:45.961146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.101 [2024-06-11 13:46:45.961236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.101 [2024-06-11 13:46:45.961236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 [2024-06-11 13:46:46.746896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 Malloc0 00:18:54.038 [2024-06-11 13:46:46.814962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1383210 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1383210 /var/tmp/bdevperf.sock 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1383210 ']' 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:54.038 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:54.038 { 00:18:54.038 "params": { 00:18:54.038 "name": "Nvme$subsystem", 00:18:54.038 "trtype": "$TEST_TRANSPORT", 00:18:54.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.038 "adrfam": "ipv4", 00:18:54.038 "trsvcid": "$NVMF_PORT", 00:18:54.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.038 "hdgst": ${hdgst:-false}, 00:18:54.039 "ddgst": ${ddgst:-false} 00:18:54.039 }, 00:18:54.039 "method": "bdev_nvme_attach_controller" 00:18:54.039 } 00:18:54.039 EOF 00:18:54.039 )") 00:18:54.039 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:18:54.039 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:18:54.039 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:18:54.039 13:46:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:54.039 "params": { 00:18:54.039 "name": "Nvme0", 00:18:54.039 "trtype": "tcp", 00:18:54.039 "traddr": "10.0.0.2", 00:18:54.039 "adrfam": "ipv4", 00:18:54.039 "trsvcid": "4420", 00:18:54.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:54.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:54.039 "hdgst": false, 00:18:54.039 "ddgst": false 00:18:54.039 }, 00:18:54.039 "method": "bdev_nvme_attach_controller" 00:18:54.039 }' 00:18:54.039 [2024-06-11 13:46:46.921866] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:18:54.039 [2024-06-11 13:46:46.921934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383210 ] 00:18:54.298 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.298 [2024-06-11 13:46:47.023354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.298 [2024-06-11 13:46:47.104421] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.556 Running I/O for 10 seconds... 
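gen_nvmf_target_json expands the template above into the JSON config that bdevperf reads from /dev/fd/63: a single bdev_nvme_attach_controller call that creates Nvme0n1 over NVMe/TCP. Written out as a standalone file, a sketch assuming SPDK's usual "subsystems"/"bdev"/"config" wrapper for bdevperf --json input, with the values from this run:

    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    JSON
    ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10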
00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:55.124 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.125 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:55.125 [2024-06-11 13:46:47.849412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.125 [2024-06-11 13:46:47.849459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.125 [2024-06-11 13:46:47.849482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.125 [2024-06-11 13:46:47.849496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.125 [... the remaining admin ASYNC EVENT REQUESTs (qid:0 cid:2 and cid:3) aborted with the same SQ DELETION status ...] 00:18:55.125 [2024-06-11 13:46:47.849567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5e400 is same with the state(5) to be set 00:18:55.125 [2024-06-11 13:46:47.849622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.125 [2024-06-11 13:46:47.849638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.125 [... 63 in-flight READs (sqid:1 cid:0-62, lba 90112-98048, len:128) aborted with the same SQ DELETION status ...] 00:18:55.126 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.127 [2024-06-11 13:46:47.851449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116f1f0 is same with the state(5) to be set 00:18:55.127 [2024-06-11 13:46:47.851513] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x116f1f0 was disconnected and freed. reset controller.
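The burst of ABORTED - SQ DELETION completions above is the intended effect of the test, not a malfunction: nvmf_subsystem_remove_host revokes the running initiator's access, the target tears down its queue pairs, and every command still in flight (one write and 63 reads at queue depth 64) comes back aborted, after which bdev_nvme frees the qpair and schedules a controller reset. The same trigger from a shell, using SPDK's RPC client:

    # Revoke the host while I/O is running; its active qpairs are torn down.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0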
00:18:55.127 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:55.127 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.127 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:18:55.127 [2024-06-11 13:46:47.852756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:55.127 task offset: 98176 on job bdev=Nvme0n1 fails 00:18:55.127 00:18:55.127 Latency(us) 00:18:55.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.127 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:55.127 Job: Nvme0n1 ended in about 0.58 seconds with error 00:18:55.127 Verification LBA range: start 0x0 length 0x400 00:18:55.127 Nvme0n1 : 0.58 1209.89 75.62 109.99 0.00 47309.23 6212.81 41733.32 00:18:55.127 =================================================================================================================== 00:18:55.127 Total : 1209.89 75.62 109.99 0.00 47309.23 6212.81 41733.32 00:18:55.127 [2024-06-11 13:46:47.854823] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:55.127 [2024-06-11 13:46:47.854844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5e400 (9): Bad file descriptor 00:18:55.127 [2024-06-11 13:46:47.858078] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:18:55.127 [2024-06-11 13:46:47.858285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:55.127 [2024-06-11 13:46:47.858317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.127 [2024-06-11 13:46:47.858339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:18:55.127 [2024-06-11 13:46:47.858353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:18:55.127 [2024-06-11 13:46:47.858366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:18:55.127 [2024-06-11 13:46:47.858379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd5e400 00:18:55.127 [2024-06-11 13:46:47.858406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5e400 (9): Bad file descriptor 00:18:55.127 [2024-06-11 13:46:47.858425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:55.127 [2024-06-11 13:46:47.858437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:55.127 [2024-06-11 13:46:47.858451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:55.127 [2024-06-11 13:46:47.858473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
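The reconnect attempt dies with sct 1, sc 132 (0x84, Connect Invalid Host) because it races the nvmf_subsystem_add_host call above: the target logs "does not allow host" while the allowed-host list is still empty. To inspect which hosts a subsystem currently admits, a hedged example assuming the default RPC socket and jq on the PATH:

    ./scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | .hosts'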
00:18:55.127 13:46:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.127 13:46:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1383210 00:18:56.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1383210) - No such process 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:56.062 { 00:18:56.062 "params": { 00:18:56.062 "name": "Nvme$subsystem", 00:18:56.062 "trtype": "$TEST_TRANSPORT", 00:18:56.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.062 "adrfam": "ipv4", 00:18:56.062 "trsvcid": "$NVMF_PORT", 00:18:56.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.062 "hdgst": ${hdgst:-false}, 00:18:56.062 "ddgst": ${ddgst:-false} 00:18:56.062 }, 00:18:56.062 "method": "bdev_nvme_attach_controller" 00:18:56.062 } 00:18:56.062 EOF 00:18:56.062 )") 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:18:56.062 13:46:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:56.063 "params": { 00:18:56.063 "name": "Nvme0", 00:18:56.063 "trtype": "tcp", 00:18:56.063 "traddr": "10.0.0.2", 00:18:56.063 "adrfam": "ipv4", 00:18:56.063 "trsvcid": "4420", 00:18:56.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:56.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:56.063 "hdgst": false, 00:18:56.063 "ddgst": false 00:18:56.063 }, 00:18:56.063 "method": "bdev_nvme_attach_controller" 00:18:56.063 }' 00:18:56.063 [2024-06-11 13:46:48.922017] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:18:56.063 [2024-06-11 13:46:48.922084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383494 ] 00:18:56.063 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.321 [2024-06-11 13:46:49.023432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.321 [2024-06-11 13:46:49.103865] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.579 Running I/O for 1 seconds... 
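For the 1-second verify pass below, bandwidth follows directly from IOPS at the 65536-byte I/O size (MiB/s = IOPS x 65536 / 2^20, i.e. IOPS / 16). A quick sanity check against the table that follows:

    # 1374.09 IOPS at 64 KiB per I/O: 1374.09 / 16 = 85.88 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 1374.09 * 65536 / 1048576 }'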
00:18:57.515 00:18:57.515 Latency(us) 00:18:57.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.515 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:57.515 Verification LBA range: start 0x0 length 0x400 00:18:57.515 Nvme0n1 : 1.02 1374.09 85.88 0.00 0.00 45666.60 7602.18 41733.32 00:18:57.515 =================================================================================================================== 00:18:57.515 Total : 1374.09 85.88 0.00 0.00 45666.60 7602.18 41733.32 00:18:57.773 13:46:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.774 rmmod nvme_tcp 00:18:57.774 rmmod nvme_fabrics 00:18:57.774 rmmod nvme_keyring 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1382906 ']' 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1382906 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 1382906 ']' 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 1382906 00:18:57.774 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1382906 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1382906' 00:18:58.032 killing process with pid 1382906 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 1382906 00:18:58.032 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 1382906 00:18:58.032 [2024-06-11 13:46:50.932715] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.290 13:46:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.192 13:46:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.192 13:46:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:00.192 00:19:00.192 real 0m14.564s 00:19:00.192 user 0m24.520s 00:19:00.192 sys 0m6.804s 00:19:00.192 13:46:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:00.192 13:46:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:00.192 ************************************ 00:19:00.192 END TEST nvmf_host_management 00:19:00.192 ************************************ 00:19:00.192 13:46:53 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:00.192 13:46:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:00.192 13:46:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:00.192 13:46:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.451 ************************************ 00:19:00.451 START TEST nvmf_lvol 00:19:00.451 ************************************ 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:00.451 * Looking for test storage... 
00:19:00.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.451 13:46:53 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.451 13:46:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:07.080 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:07.080 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:07.080 Found net devices under 0000:af:00.0: cvl_0_0 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:07.080 Found net devices under 0000:af:00.1: cvl_0_1 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.080 
13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.080 13:46:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:19:07.338 00:19:07.338 --- 10.0.0.2 ping statistics --- 00:19:07.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.338 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:19:07.338 00:19:07.338 --- 10.0.0.1 ping statistics --- 00:19:07.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.338 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1387462 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1387462 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 1387462 ']' 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:07.338 13:47:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:07.338 [2024-06-11 13:47:00.142784] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:07.338 [2024-06-11 13:47:00.142842] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.338 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.597 [2024-06-11 13:47:00.250603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.597 [2024-06-11 13:47:00.331390] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.597 [2024-06-11 13:47:00.331438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
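The ping pair above verifies the split dataplane before the target comes up: cvl_0_0 (10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace where the target runs, while the initiator keeps cvl_0_1 (10.0.0.1) in the default namespace. A sketch condensed from the nvmf_tcp_init and nvmfappstart steps logged above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Launch the target inside the namespace, as nvmfappstart does above
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x7 &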
00:19:07.597 [2024-06-11 13:47:00.331451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.597 [2024-06-11 13:47:00.331464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.597 [2024-06-11 13:47:00.331474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.597 [2024-06-11 13:47:00.331543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.597 [2024-06-11 13:47:00.331660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.597 [2024-06-11 13:47:00.331664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.166 13:47:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:08.166 13:47:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:19:08.166 13:47:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.166 13:47:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:08.166 13:47:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:08.425 13:47:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.425 13:47:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:08.425 [2024-06-11 13:47:01.309588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.684 13:47:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:08.944 13:47:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:08.944 13:47:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:09.203 13:47:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:09.203 13:47:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:09.203 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:09.461 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=94eea6f5-c6e0-4163-9bb9-7fd92876a221 00:19:09.461 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94eea6f5-c6e0-4163-9bb9-7fd92876a221 lvol 20 00:19:09.719 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f8a19cc9-6637-4024-9f81-ed5929728014 00:19:09.719 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:09.977 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8a19cc9-6637-4024-9f81-ed5929728014 00:19:10.236 13:47:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
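The bdev stack assembled above for the lvol test, in order: two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol, and the lvol exported as a namespace of cnode0 (the listener notice follows below). A condensed sketch of the same rpc sequence; the captured UUIDs are the ones from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512                       # Malloc0
$rpc bdev_malloc_create 64 512                       # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # 94eea6f5-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # f8a19cc9-... in this run

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420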
00:19:10.495 [2024-06-11 13:47:03.206646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.495 13:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:10.754 13:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1388035 00:19:10.755 13:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:10.755 13:47:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:10.755 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.692 13:47:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f8a19cc9-6637-4024-9f81-ed5929728014 MY_SNAPSHOT 00:19:11.951 13:47:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=18d7b8fd-06e1-400b-9c4c-dd34f71716d8 00:19:11.951 13:47:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f8a19cc9-6637-4024-9f81-ed5929728014 30 00:19:12.211 13:47:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 18d7b8fd-06e1-400b-9c4c-dd34f71716d8 MY_CLONE 00:19:12.470 13:47:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8311274-bd2b-41fc-b9ba-f716620a69ce 00:19:12.470 13:47:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8311274-bd2b-41fc-b9ba-f716620a69ce 00:19:13.407 13:47:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1388035 00:19:21.573 Initializing NVMe Controllers 00:19:21.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:21.573 Controller IO queue size 128, less than required. 00:19:21.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:21.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:21.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:21.573 Initialization complete. Launching workers. 
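Before the results below, the lvol surface gets exercised under live I/O from spdk_nvme_perf: snapshot the 20 MiB lvol, grow the origin to 30 MiB, clone the snapshot, then inflate the clone so it allocates its own clusters and no longer depends on the snapshot. A sketch of that sequence, reusing the bdev names from this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvol=f8a19cc9-6637-4024-9f81-ed5929728014

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # read-only; origin becomes a thin clone of it
$rpc bdev_lvol_resize "$lvol" 30                     # grow the origin lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # thin clone sharing the snapshot's clusters
$rpc bdev_lvol_inflate "$clone"                      # copy clusters in; clone becomes independent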
00:19:21.573 ======================================================== 00:19:21.573 Latency(us) 00:19:21.573 Device Information : IOPS MiB/s Average min max 00:19:21.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10012.10 39.11 12792.45 2191.81 63559.85 00:19:21.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9897.50 38.66 12941.83 3840.06 64545.66 00:19:21.573 ======================================================== 00:19:21.573 Total : 19909.60 77.77 12866.71 2191.81 64545.66 00:19:21.573 00:19:21.573 13:47:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:21.573 13:47:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f8a19cc9-6637-4024-9f81-ed5929728014 00:19:21.573 13:47:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94eea6f5-c6e0-4163-9bb9-7fd92876a221 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.832 rmmod nvme_tcp 00:19:21.832 rmmod nvme_fabrics 00:19:21.832 rmmod nvme_keyring 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1387462 ']' 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1387462 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 1387462 ']' 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 1387462 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1387462 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1387462' 00:19:21.832 killing process with pid 1387462 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 1387462 00:19:21.832 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 1387462 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.091 
13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.091 13:47:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:24.629 00:19:24.629 real 0m23.928s 00:19:24.629 user 1m5.518s 00:19:24.629 sys 0m10.204s 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:24.629 ************************************ 00:19:24.629 END TEST nvmf_lvol 00:19:24.629 ************************************ 00:19:24.629 13:47:17 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:24.629 13:47:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:24.629 13:47:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:24.629 13:47:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.629 ************************************ 00:19:24.629 START TEST nvmf_lvs_grow 00:19:24.629 ************************************ 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:24.629 * Looking for test storage... 
00:19:24.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.629 13:47:17 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.630 13:47:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:31.202 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:31.202 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:31.202 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:31.203 Found net devices under 0000:af:00.0: cvl_0_0 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:31.203 Found net devices under 0000:af:00.1: cvl_0_1 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.203 13:47:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:19:31.463 00:19:31.463 --- 10.0.0.2 ping statistics --- 00:19:31.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.463 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:19:31.463 00:19:31.463 --- 10.0.0.1 ping statistics --- 00:19:31.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.463 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1393630 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1393630 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 1393630 ']' 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:31.463 13:47:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:31.722 [2024-06-11 13:47:24.401265] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:31.722 [2024-06-11 13:47:24.401328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.722 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.722 [2024-06-11 13:47:24.500342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.722 [2024-06-11 13:47:24.580853] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.722 [2024-06-11 13:47:24.580901] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
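The lvs_grow_clean case that starts below exercises growing the backing device under an lvstore: the store is built on a resizable AIO bdev, the backing file is enlarged, and bdev_aio_rescan makes the bdev pick up the new size. Note the lvstore does not consume the new space automatically; the rescan only raises an "Unsupported bdev event" on the lvol layer and total_data_clusters stays at 49 until the store is grown explicitly later in the test. A condensed sketch with the paths and sizes used below:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096            # 200 MiB / 4 KiB blocks = 51200 blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs) # 49 usable 4 MiB data clusters
$rpc bdev_lvol_create -u "$lvs" lvol 150             # 150 MiB lvol inside the store

truncate -s 400M "$aio"                              # grow the backing file
$rpc bdev_aio_rescan aio_bdev                        # block count 51200 -> 102400
$rpc bdev_lvol_get_lvstores -u "$lvs"                # total_data_clusters still 49 here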
00:19:31.722 [2024-06-11 13:47:24.580915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.723 [2024-06-11 13:47:24.580926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.723 [2024-06-11 13:47:24.580936] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.723 [2024-06-11 13:47:24.580964] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:32.696 [2024-06-11 13:47:25.565221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:32.696 13:47:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:32.955 ************************************ 00:19:32.955 START TEST lvs_grow_clean 00:19:32.955 ************************************ 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:32.955 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:33.214 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:19:33.214 13:47:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:33.473 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:33.473 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:33.473 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:33.473 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:33.473 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:33.473 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 lvol 150 00:19:33.731 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=98e60073-3358-446a-8039-930b53ed4017 00:19:33.731 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:33.731 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:33.731 [2024-06-11 13:47:26.590209] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:33.731 [2024-06-11 13:47:26.590264] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:33.731 true 00:19:33.731 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:33.731 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:33.990 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:33.990 13:47:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:34.249 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98e60073-3358-446a-8039-930b53ed4017 00:19:34.508 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:34.767 [2024-06-11 13:47:27.460895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.767 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1394307 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1394307 /var/tmp/bdevperf.sock 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 1394307 ']' 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:35.025 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.026 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:35.026 13:47:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:35.026 [2024-06-11 13:47:27.744896] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
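Stripped of the xtrace noise, the clean-path provisioning that just ran reduces to a short rpc.py sequence. A condensed replay, with paths as in this workspace and the lvstore/lvol UUIDs captured from the create calls rather than hard-coded:

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

rm -f "$aio" && truncate -s 200M "$aio"        # 200 MiB backing file
"$rpc" bdev_aio_create "$aio" aio_bdev 4096
lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol

truncate -s 400M "$aio"                        # grow the file under the bdev
"$rpc" bdev_aio_rescan aio_bdev                # 51200 -> 102400 blocks, per the notice above

"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420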
00:19:35.026 [2024-06-11 13:47:27.744960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394307 ] 00:19:35.026 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.026 [2024-06-11 13:47:27.834790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.026 [2024-06-11 13:47:27.920450] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.957 13:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:35.957 13:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:19:35.958 13:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:36.216 Nvme0n1 00:19:36.216 13:47:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:36.475 [ 00:19:36.475 { 00:19:36.475 "name": "Nvme0n1", 00:19:36.475 "aliases": [ 00:19:36.475 "98e60073-3358-446a-8039-930b53ed4017" 00:19:36.475 ], 00:19:36.475 "product_name": "NVMe disk", 00:19:36.475 "block_size": 4096, 00:19:36.475 "num_blocks": 38912, 00:19:36.475 "uuid": "98e60073-3358-446a-8039-930b53ed4017", 00:19:36.475 "assigned_rate_limits": { 00:19:36.475 "rw_ios_per_sec": 0, 00:19:36.475 "rw_mbytes_per_sec": 0, 00:19:36.475 "r_mbytes_per_sec": 0, 00:19:36.475 "w_mbytes_per_sec": 0 00:19:36.475 }, 00:19:36.475 "claimed": false, 00:19:36.475 "zoned": false, 00:19:36.475 "supported_io_types": { 00:19:36.475 "read": true, 00:19:36.475 "write": true, 00:19:36.475 "unmap": true, 00:19:36.475 "write_zeroes": true, 00:19:36.475 "flush": true, 00:19:36.475 "reset": true, 00:19:36.475 "compare": true, 00:19:36.475 "compare_and_write": true, 00:19:36.475 "abort": true, 00:19:36.475 "nvme_admin": true, 00:19:36.475 "nvme_io": true 00:19:36.475 }, 00:19:36.475 "memory_domains": [ 00:19:36.475 { 00:19:36.475 "dma_device_id": "system", 00:19:36.475 "dma_device_type": 1 00:19:36.475 } 00:19:36.475 ], 00:19:36.475 "driver_specific": { 00:19:36.475 "nvme": [ 00:19:36.475 { 00:19:36.475 "trid": { 00:19:36.475 "trtype": "TCP", 00:19:36.475 "adrfam": "IPv4", 00:19:36.475 "traddr": "10.0.0.2", 00:19:36.475 "trsvcid": "4420", 00:19:36.475 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:36.475 }, 00:19:36.475 "ctrlr_data": { 00:19:36.475 "cntlid": 1, 00:19:36.475 "vendor_id": "0x8086", 00:19:36.475 "model_number": "SPDK bdev Controller", 00:19:36.475 "serial_number": "SPDK0", 00:19:36.475 "firmware_revision": "24.09", 00:19:36.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:36.475 "oacs": { 00:19:36.475 "security": 0, 00:19:36.475 "format": 0, 00:19:36.475 "firmware": 0, 00:19:36.475 "ns_manage": 0 00:19:36.475 }, 00:19:36.475 "multi_ctrlr": true, 00:19:36.475 "ana_reporting": false 00:19:36.475 }, 00:19:36.475 "vs": { 00:19:36.475 "nvme_version": "1.3" 00:19:36.475 }, 00:19:36.475 "ns_data": { 00:19:36.475 "id": 1, 00:19:36.475 "can_share": true 00:19:36.475 } 00:19:36.475 } 00:19:36.475 ], 00:19:36.475 "mp_policy": "active_passive" 00:19:36.475 } 00:19:36.475 } 00:19:36.475 ] 00:19:36.475 13:47:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:36.475 13:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1394546 00:19:36.475 13:47:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:36.475 Running I/O for 10 seconds... 00:19:37.409 Latency(us) 00:19:37.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:37.409 Nvme0n1 : 1.00 16911.00 66.06 0.00 0.00 0.00 0.00 0.00 00:19:37.409 =================================================================================================================== 00:19:37.409 Total : 16911.00 66.06 0.00 0.00 0.00 0.00 0.00 00:19:37.409 00:19:38.345 13:47:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:38.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:38.603 Nvme0n1 : 2.00 17031.50 66.53 0.00 0.00 0.00 0.00 0.00 00:19:38.603 =================================================================================================================== 00:19:38.603 Total : 17031.50 66.53 0.00 0.00 0.00 0.00 0.00 00:19:38.603 00:19:38.603 true 00:19:38.603 13:47:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:38.603 13:47:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:38.862 13:47:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:38.862 13:47:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:38.862 13:47:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1394546 00:19:39.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:39.429 Nvme0n1 : 3.00 17092.00 66.77 0.00 0.00 0.00 0.00 0.00 00:19:39.429 =================================================================================================================== 00:19:39.429 Total : 17092.00 66.77 0.00 0.00 0.00 0.00 0.00 00:19:39.429 00:19:40.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:40.806 Nvme0n1 : 4.00 17139.00 66.95 0.00 0.00 0.00 0.00 0.00 00:19:40.806 =================================================================================================================== 00:19:40.806 Total : 17139.00 66.95 0.00 0.00 0.00 0.00 0.00 00:19:40.806 00:19:41.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:41.740 Nvme0n1 : 5.00 17167.40 67.06 0.00 0.00 0.00 0.00 0.00 00:19:41.740 =================================================================================================================== 00:19:41.740 Total : 17167.40 67.06 0.00 0.00 0.00 0.00 0.00 00:19:41.740 00:19:42.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:42.677 Nvme0n1 : 6.00 17199.50 67.19 0.00 0.00 0.00 0.00 0.00 00:19:42.677 
=================================================================================================================== 00:19:42.677 Total : 17199.50 67.19 0.00 0.00 0.00 0.00 0.00 00:19:42.677 00:19:43.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:43.612 Nvme0n1 : 7.00 17217.71 67.26 0.00 0.00 0.00 0.00 0.00 00:19:43.612 =================================================================================================================== 00:19:43.612 Total : 17217.71 67.26 0.00 0.00 0.00 0.00 0.00 00:19:43.612 00:19:44.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:44.549 Nvme0n1 : 8.00 17241.25 67.35 0.00 0.00 0.00 0.00 0.00 00:19:44.549 =================================================================================================================== 00:19:44.549 Total : 17241.25 67.35 0.00 0.00 0.00 0.00 0.00 00:19:44.549 00:19:45.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:45.486 Nvme0n1 : 9.00 17259.89 67.42 0.00 0.00 0.00 0.00 0.00 00:19:45.486 =================================================================================================================== 00:19:45.486 Total : 17259.89 67.42 0.00 0.00 0.00 0.00 0.00 00:19:45.486 00:19:46.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:46.422 Nvme0n1 : 10.00 17261.90 67.43 0.00 0.00 0.00 0.00 0.00 00:19:46.422 =================================================================================================================== 00:19:46.422 Total : 17261.90 67.43 0.00 0.00 0.00 0.00 0.00 00:19:46.422 00:19:46.682 00:19:46.682 Latency(us) 00:19:46.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:46.682 Nvme0n1 : 10.00 17266.65 67.45 0.00 0.00 7408.25 2215.12 13107.20 00:19:46.682 =================================================================================================================== 00:19:46.682 Total : 17266.65 67.45 0.00 0.00 7408.25 2215.12 13107.20 00:19:46.682 0 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1394307 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 1394307 ']' 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 1394307 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1394307 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1394307' 00:19:46.682 killing process with pid 1394307 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 1394307 00:19:46.682 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.682 00:19:46.682 Latency(us) 00:19:46.682 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:19:46.682 =================================================================================================================== 00:19:46.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.682 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 1394307 00:19:46.940 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:46.940 13:47:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:47.199 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:47.199 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:19:47.458 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:19:47.458 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:19:47.458 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:47.717 [2024-06-11 13:47:40.498138] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:47.717 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:47.976 request: 00:19:47.976 { 00:19:47.976 "uuid": "b3cd6562-cb60-4209-aa2d-ce4daf0db956", 00:19:47.976 "method": "bdev_lvol_get_lvstores", 00:19:47.976 "req_id": 1 00:19:47.976 } 00:19:47.976 Got JSON-RPC error response 00:19:47.976 response: 00:19:47.976 { 00:19:47.976 "code": -19, 00:19:47.976 "message": "No such device" 00:19:47.976 } 00:19:47.976 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:19:47.976 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:47.976 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:47.976 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:47.976 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:48.235 aio_bdev 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 98e60073-3358-446a-8039-930b53ed4017 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=98e60073-3358-446a-8039-930b53ed4017 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:19:48.235 13:47:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:48.493 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 98e60073-3358-446a-8039-930b53ed4017 -t 2000 00:19:48.753 [ 00:19:48.753 { 00:19:48.753 "name": "98e60073-3358-446a-8039-930b53ed4017", 00:19:48.753 "aliases": [ 00:19:48.753 "lvs/lvol" 00:19:48.753 ], 00:19:48.753 "product_name": "Logical Volume", 00:19:48.753 "block_size": 4096, 00:19:48.753 "num_blocks": 38912, 00:19:48.753 "uuid": "98e60073-3358-446a-8039-930b53ed4017", 00:19:48.753 "assigned_rate_limits": { 00:19:48.753 "rw_ios_per_sec": 0, 00:19:48.753 "rw_mbytes_per_sec": 0, 00:19:48.753 "r_mbytes_per_sec": 0, 00:19:48.753 "w_mbytes_per_sec": 0 00:19:48.753 }, 00:19:48.753 "claimed": false, 00:19:48.753 "zoned": false, 00:19:48.753 "supported_io_types": { 00:19:48.753 "read": true, 00:19:48.753 "write": true, 00:19:48.753 "unmap": true, 00:19:48.753 "write_zeroes": true, 00:19:48.753 "flush": false, 00:19:48.753 "reset": true, 00:19:48.753 "compare": false, 00:19:48.753 "compare_and_write": false, 00:19:48.753 "abort": false, 00:19:48.753 "nvme_admin": false, 00:19:48.753 "nvme_io": false 00:19:48.753 }, 00:19:48.753 "driver_specific": { 00:19:48.753 "lvol": { 00:19:48.753 "lvol_store_uuid": "b3cd6562-cb60-4209-aa2d-ce4daf0db956", 00:19:48.753 "base_bdev": "aio_bdev", 
00:19:48.753 "thin_provision": false, 00:19:48.753 "num_allocated_clusters": 38, 00:19:48.753 "snapshot": false, 00:19:48.753 "clone": false, 00:19:48.753 "esnap_clone": false 00:19:48.753 } 00:19:48.753 } 00:19:48.753 } 00:19:48.753 ] 00:19:48.753 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:19:48.753 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:48.753 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:19:48.753 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:19:48.753 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:48.753 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:19:49.012 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:19:49.012 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98e60073-3358-446a-8039-930b53ed4017 00:19:49.271 13:47:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3cd6562-cb60-4209-aa2d-ce4daf0db956 00:19:49.530 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:49.530 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:49.789 00:19:49.789 real 0m16.843s 00:19:49.789 user 0m16.082s 00:19:49.789 sys 0m2.040s 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.789 ************************************ 00:19:49.789 END TEST lvs_grow_clean 00:19:49.789 ************************************ 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:49.789 ************************************ 00:19:49.789 START TEST lvs_grow_dirty 00:19:49.789 ************************************ 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:49.789 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:50.047 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:50.048 13:47:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:50.307 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=41e10053-077d-4c32-b122-1bbb337c50e5 00:19:50.307 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:19:50.307 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:50.599 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:50.599 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:50.599 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 41e10053-077d-4c32-b122-1bbb337c50e5 lvol 150 00:19:50.600 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:19:50.600 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:50.600 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:50.858 [2024-06-11 13:47:43.565453] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:50.858 [2024-06-11 13:47:43.565520] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:50.858 true 00:19:50.858 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:19:50.858 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:19:50.858 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:50.858 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:51.117 13:47:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:19:51.376 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.635 [2024-06-11 13:47:44.307745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1397164 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1397164 /var/tmp/bdevperf.sock 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1397164 ']' 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:51.635 13:47:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:51.635 [2024-06-11 13:47:44.531886] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
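The cluster counts asserted here follow from the sizes: a 200 MiB file carved into 4 MiB clusters gives 50, of which 49 surface as data clusters (the remainder presumably absorbed by lvstore metadata), and after the file is grown to 400 MiB and the lvstore is grown the count should land at 99. The grow-and-verify step both variants run while bdevperf pushes I/O, as a sketch (rpc path as above; the lvstore UUID is passed in):

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvs=$1                                         # UUID printed by bdev_lvol_create_lvstore

before=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( before == 49 ))                             # 200 MiB / 4 MiB clusters, minus metadata
"$rpc" bdev_lvol_grow_lvstore -u "$lvs"        # claim the rescanned capacity
after=$("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( after == 99 ))                              # roughly doubled, again minus metadata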
00:19:51.635 [2024-06-11 13:47:44.531951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1397164 ] 00:19:51.894 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.894 [2024-06-11 13:47:44.625512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.894 [2024-06-11 13:47:44.711138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.831 13:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:52.831 13:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:19:52.831 13:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:53.091 Nvme0n1 00:19:53.091 13:47:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:53.091 [ 00:19:53.091 { 00:19:53.091 "name": "Nvme0n1", 00:19:53.091 "aliases": [ 00:19:53.091 "d24ce99a-aa63-4ab6-bf9d-0a020eda4960" 00:19:53.091 ], 00:19:53.091 "product_name": "NVMe disk", 00:19:53.091 "block_size": 4096, 00:19:53.091 "num_blocks": 38912, 00:19:53.091 "uuid": "d24ce99a-aa63-4ab6-bf9d-0a020eda4960", 00:19:53.091 "assigned_rate_limits": { 00:19:53.091 "rw_ios_per_sec": 0, 00:19:53.091 "rw_mbytes_per_sec": 0, 00:19:53.091 "r_mbytes_per_sec": 0, 00:19:53.091 "w_mbytes_per_sec": 0 00:19:53.091 }, 00:19:53.091 "claimed": false, 00:19:53.091 "zoned": false, 00:19:53.091 "supported_io_types": { 00:19:53.091 "read": true, 00:19:53.091 "write": true, 00:19:53.091 "unmap": true, 00:19:53.091 "write_zeroes": true, 00:19:53.091 "flush": true, 00:19:53.091 "reset": true, 00:19:53.091 "compare": true, 00:19:53.091 "compare_and_write": true, 00:19:53.091 "abort": true, 00:19:53.091 "nvme_admin": true, 00:19:53.091 "nvme_io": true 00:19:53.091 }, 00:19:53.091 "memory_domains": [ 00:19:53.091 { 00:19:53.091 "dma_device_id": "system", 00:19:53.091 "dma_device_type": 1 00:19:53.091 } 00:19:53.091 ], 00:19:53.091 "driver_specific": { 00:19:53.091 "nvme": [ 00:19:53.091 { 00:19:53.091 "trid": { 00:19:53.091 "trtype": "TCP", 00:19:53.091 "adrfam": "IPv4", 00:19:53.091 "traddr": "10.0.0.2", 00:19:53.091 "trsvcid": "4420", 00:19:53.091 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:53.091 }, 00:19:53.091 "ctrlr_data": { 00:19:53.091 "cntlid": 1, 00:19:53.091 "vendor_id": "0x8086", 00:19:53.091 "model_number": "SPDK bdev Controller", 00:19:53.091 "serial_number": "SPDK0", 00:19:53.091 "firmware_revision": "24.09", 00:19:53.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.091 "oacs": { 00:19:53.091 "security": 0, 00:19:53.091 "format": 0, 00:19:53.091 "firmware": 0, 00:19:53.091 "ns_manage": 0 00:19:53.091 }, 00:19:53.091 "multi_ctrlr": true, 00:19:53.091 "ana_reporting": false 00:19:53.091 }, 00:19:53.091 "vs": { 00:19:53.091 "nvme_version": "1.3" 00:19:53.091 }, 00:19:53.091 "ns_data": { 00:19:53.091 "id": 1, 00:19:53.091 "can_share": true 00:19:53.091 } 00:19:53.091 } 00:19:53.091 ], 00:19:53.091 "mp_policy": "active_passive" 00:19:53.091 } 00:19:53.091 } 00:19:53.091 ] 00:19:53.350 13:47:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1397430 00:19:53.350 13:47:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:53.350 13:47:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.350 Running I/O for 10 seconds... 00:19:54.287 Latency(us) 00:19:54.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:54.287 Nvme0n1 : 1.00 17038.00 66.55 0.00 0.00 0.00 0.00 0.00 00:19:54.287 =================================================================================================================== 00:19:54.287 Total : 17038.00 66.55 0.00 0.00 0.00 0.00 0.00 00:19:54.287 00:19:55.223 13:47:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:19:55.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:55.223 Nvme0n1 : 2.00 17120.00 66.88 0.00 0.00 0.00 0.00 0.00 00:19:55.223 =================================================================================================================== 00:19:55.223 Total : 17120.00 66.88 0.00 0.00 0.00 0.00 0.00 00:19:55.223 00:19:55.481 true 00:19:55.481 13:47:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:19:55.481 13:47:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:55.740 13:47:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:55.740 13:47:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:55.740 13:47:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1397430 00:19:56.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:56.308 Nvme0n1 : 3.00 17187.33 67.14 0.00 0.00 0.00 0.00 0.00 00:19:56.308 =================================================================================================================== 00:19:56.308 Total : 17187.33 67.14 0.00 0.00 0.00 0.00 0.00 00:19:56.308 00:19:57.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:57.245 Nvme0n1 : 4.00 17194.50 67.17 0.00 0.00 0.00 0.00 0.00 00:19:57.245 =================================================================================================================== 00:19:57.245 Total : 17194.50 67.17 0.00 0.00 0.00 0.00 0.00 00:19:57.245 00:19:58.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:58.622 Nvme0n1 : 5.00 17224.20 67.28 0.00 0.00 0.00 0.00 0.00 00:19:58.622 =================================================================================================================== 00:19:58.622 Total : 17224.20 67.28 0.00 0.00 0.00 0.00 0.00 00:19:58.622 00:19:59.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:59.558 Nvme0n1 : 6.00 17242.67 67.35 0.00 0.00 0.00 0.00 0.00 00:19:59.558 
=================================================================================================================== 00:19:59.558 Total : 17242.67 67.35 0.00 0.00 0.00 0.00 0.00 00:19:59.558 00:20:00.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:00.494 Nvme0n1 : 7.00 17266.14 67.45 0.00 0.00 0.00 0.00 0.00 00:20:00.494 =================================================================================================================== 00:20:00.494 Total : 17266.14 67.45 0.00 0.00 0.00 0.00 0.00 00:20:00.494 00:20:01.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.430 Nvme0n1 : 8.00 17284.00 67.52 0.00 0.00 0.00 0.00 0.00 00:20:01.430 =================================================================================================================== 00:20:01.430 Total : 17284.00 67.52 0.00 0.00 0.00 0.00 0.00 00:20:01.430 00:20:02.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.367 Nvme0n1 : 9.00 17299.56 67.58 0.00 0.00 0.00 0.00 0.00 00:20:02.367 =================================================================================================================== 00:20:02.367 Total : 17299.56 67.58 0.00 0.00 0.00 0.00 0.00 00:20:02.367 00:20:03.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:03.304 Nvme0n1 : 10.00 17308.60 67.61 0.00 0.00 0.00 0.00 0.00 00:20:03.304 =================================================================================================================== 00:20:03.304 Total : 17308.60 67.61 0.00 0.00 0.00 0.00 0.00 00:20:03.304 00:20:03.304 00:20:03.304 Latency(us) 00:20:03.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:03.304 Nvme0n1 : 10.01 17309.71 67.62 0.00 0.00 7389.74 1992.29 15833.50 00:20:03.304 =================================================================================================================== 00:20:03.304 Total : 17309.71 67.62 0.00 0.00 7389.74 1992.29 15833.50 00:20:03.304 0 00:20:03.304 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1397164 00:20:03.304 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 1397164 ']' 00:20:03.304 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 1397164 00:20:03.304 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:20:03.304 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:03.304 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1397164 00:20:03.564 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:03.564 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:03.564 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1397164' 00:20:03.564 killing process with pid 1397164 00:20:03.564 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 1397164 00:20:03.564 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.564 00:20:03.564 Latency(us) 00:20:03.564 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:20:03.564 =================================================================================================================== 00:20:03.564 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.564 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 1397164 00:20:03.564 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.823 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:04.082 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:04.082 13:47:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1393630 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1393630 00:20:04.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1393630 Killed "${NVMF_APP[@]}" "$@" 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1399302 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1399302 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1399302 ']' 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
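What distinguishes the dirty variant is this restart: the target is SIGKILLed while the grown lvstore is still open, so nothing is flushed on the way down, and the new nvmf_tgt instance must rebuild lvstore state from the blobstore when the aio bdev is re-created (the recovery notices that follow). Condensed, under the assumption that $nvmfpid and $lvs hold the pid and lvstore UUID captured earlier in the run:

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

kill -9 "$nvmfpid"                             # dirty shutdown: lvstore never unloaded
ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0x1 &
# ...wait for /var/tmp/spdk.sock, then re-attach the same backing file:
"$rpc" bdev_aio_create "$aio" aio_bdev 4096    # triggers blobstore recovery on load
(( $("$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))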
00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:04.342 13:47:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:04.342 [2024-06-11 13:47:57.228967] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:04.342 [2024-06-11 13:47:57.229028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.601 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.601 [2024-06-11 13:47:57.338245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.601 [2024-06-11 13:47:57.423161] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.601 [2024-06-11 13:47:57.423202] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.601 [2024-06-11 13:47:57.423215] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.601 [2024-06-11 13:47:57.423228] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.601 [2024-06-11 13:47:57.423238] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.601 [2024-06-11 13:47:57.423273] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:05.538 [2024-06-11 13:47:58.374671] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:05.538 [2024-06-11 13:47:58.374779] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:05.538 [2024-06-11 13:47:58.374815] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:05.538 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:05.797 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d24ce99a-aa63-4ab6-bf9d-0a020eda4960 -t 2000 00:20:06.056 [ 00:20:06.056 { 00:20:06.056 "name": "d24ce99a-aa63-4ab6-bf9d-0a020eda4960", 00:20:06.056 "aliases": [ 00:20:06.056 "lvs/lvol" 00:20:06.056 ], 00:20:06.056 "product_name": "Logical Volume", 00:20:06.056 "block_size": 4096, 00:20:06.056 "num_blocks": 38912, 00:20:06.056 "uuid": "d24ce99a-aa63-4ab6-bf9d-0a020eda4960", 00:20:06.056 "assigned_rate_limits": { 00:20:06.056 "rw_ios_per_sec": 0, 00:20:06.056 "rw_mbytes_per_sec": 0, 00:20:06.056 "r_mbytes_per_sec": 0, 00:20:06.056 "w_mbytes_per_sec": 0 00:20:06.056 }, 00:20:06.056 "claimed": false, 00:20:06.056 "zoned": false, 00:20:06.056 "supported_io_types": { 00:20:06.056 "read": true, 00:20:06.056 "write": true, 00:20:06.056 "unmap": true, 00:20:06.056 "write_zeroes": true, 00:20:06.056 "flush": false, 00:20:06.056 "reset": true, 00:20:06.056 "compare": false, 00:20:06.056 "compare_and_write": false, 00:20:06.056 "abort": false, 00:20:06.056 "nvme_admin": false, 00:20:06.056 "nvme_io": false 00:20:06.056 }, 00:20:06.056 "driver_specific": { 00:20:06.056 "lvol": { 00:20:06.056 "lvol_store_uuid": "41e10053-077d-4c32-b122-1bbb337c50e5", 00:20:06.056 "base_bdev": "aio_bdev", 00:20:06.056 "thin_provision": false, 00:20:06.056 "num_allocated_clusters": 38, 00:20:06.056 "snapshot": false, 00:20:06.056 "clone": false, 00:20:06.056 "esnap_clone": false 00:20:06.056 } 00:20:06.056 } 00:20:06.057 } 00:20:06.057 ] 00:20:06.057 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:20:06.057 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:06.057 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:20:06.316 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:20:06.316 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:06.316 13:47:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:20:06.316 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:20:06.316 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:06.575 [2024-06-11 13:47:59.390739] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
41e10053-077d-4c32-b122-1bbb337c50e5 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:20:06.575 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:06.834 request: 00:20:06.834 { 00:20:06.834 "uuid": "41e10053-077d-4c32-b122-1bbb337c50e5", 00:20:06.834 "method": "bdev_lvol_get_lvstores", 00:20:06.834 "req_id": 1 00:20:06.834 } 00:20:06.834 Got JSON-RPC error response 00:20:06.834 response: 00:20:06.834 { 00:20:06.834 "code": -19, 00:20:06.834 "message": "No such device" 00:20:06.834 } 00:20:06.834 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:20:06.834 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:06.834 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:06.834 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:06.834 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:07.093 aio_bdev 00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
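The waitforbdev call just above follows a simple pattern: settle any examine-on-attach work, then ask for the bdev with a timeout. A minimal sketch of that pattern, assuming a shortened rpc.py path (the trace uses the full workspace path) and reusing this run's lvol UUID purely as an example — the canonical helper lives in common/autotest_common.sh and may do more, so treat this as an illustrative reduction:

    rpc=./scripts/rpc.py                       # shortened; not the literal path in the trace
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}          # ms, matching the "-t 2000" seen above
        "$rpc" bdev_wait_for_examine           # let examine callbacks finish first
        "$rpc" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
    }
    waitforbdev d24ce99a-aa63-4ab6-bf9d-0a020eda4960

The -t flag makes bdev_get_bdevs itself wait for the bdev to appear or the timeout to expire, which is why no explicit retry loop is visible in this part of the trace.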
00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:07.093 13:47:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:07.352 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d24ce99a-aa63-4ab6-bf9d-0a020eda4960 -t 2000 00:20:07.611 [ 00:20:07.611 { 00:20:07.611 "name": "d24ce99a-aa63-4ab6-bf9d-0a020eda4960", 00:20:07.611 "aliases": [ 00:20:07.611 "lvs/lvol" 00:20:07.611 ], 00:20:07.611 "product_name": "Logical Volume", 00:20:07.611 "block_size": 4096, 00:20:07.611 "num_blocks": 38912, 00:20:07.611 "uuid": "d24ce99a-aa63-4ab6-bf9d-0a020eda4960", 00:20:07.611 "assigned_rate_limits": { 00:20:07.611 "rw_ios_per_sec": 0, 00:20:07.611 "rw_mbytes_per_sec": 0, 00:20:07.611 "r_mbytes_per_sec": 0, 00:20:07.611 "w_mbytes_per_sec": 0 00:20:07.611 }, 00:20:07.611 "claimed": false, 00:20:07.611 "zoned": false, 00:20:07.611 "supported_io_types": { 00:20:07.611 "read": true, 00:20:07.611 "write": true, 00:20:07.611 "unmap": true, 00:20:07.611 "write_zeroes": true, 00:20:07.611 "flush": false, 00:20:07.611 "reset": true, 00:20:07.611 "compare": false, 00:20:07.611 "compare_and_write": false, 00:20:07.611 "abort": false, 00:20:07.611 "nvme_admin": false, 00:20:07.611 "nvme_io": false 00:20:07.611 }, 00:20:07.611 "driver_specific": { 00:20:07.611 "lvol": { 00:20:07.611 "lvol_store_uuid": "41e10053-077d-4c32-b122-1bbb337c50e5", 00:20:07.611 "base_bdev": "aio_bdev", 00:20:07.611 "thin_provision": false, 00:20:07.611 "num_allocated_clusters": 38, 00:20:07.611 "snapshot": false, 00:20:07.611 "clone": false, 00:20:07.611 "esnap_clone": false 00:20:07.611 } 00:20:07.611 } 00:20:07.611 } 00:20:07.611 ] 00:20:07.611 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:20:07.611 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:07.612 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:07.612 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:07.612 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:07.612 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:07.870 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:07.870 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d24ce99a-aa63-4ab6-bf9d-0a020eda4960 00:20:08.129 13:48:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41e10053-077d-4c32-b122-1bbb337c50e5 00:20:08.387 13:48:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:08.387 13:48:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:08.387 00:20:08.387 real 0m18.714s 00:20:08.387 user 0m47.378s 00:20:08.387 sys 0m4.694s 00:20:08.387 13:48:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:08.387 13:48:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:08.387 ************************************ 00:20:08.387 END TEST lvs_grow_dirty 00:20:08.387 ************************************ 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:08.663 nvmf_trace.0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.663 rmmod nvme_tcp 00:20:08.663 rmmod nvme_fabrics 00:20:08.663 rmmod nvme_keyring 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1399302 ']' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1399302 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 1399302 ']' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 1399302 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1399302 00:20:08.663 13:48:01 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1399302' 00:20:08.663 killing process with pid 1399302 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 1399302 00:20:08.663 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 1399302 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.947 13:48:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.852 13:48:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.852 00:20:10.852 real 0m46.611s 00:20:10.852 user 1m10.319s 00:20:10.852 sys 0m12.690s 00:20:10.852 13:48:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:10.852 13:48:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:10.852 ************************************ 00:20:10.852 END TEST nvmf_lvs_grow 00:20:10.852 ************************************ 00:20:11.111 13:48:03 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:11.111 13:48:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:11.111 13:48:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:11.111 13:48:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.111 ************************************ 00:20:11.111 START TEST nvmf_bdev_io_wait 00:20:11.111 ************************************ 00:20:11.111 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:11.111 * Looking for test storage... 
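The teardown that closed out nvmf_lvs_grow just above (the kill -0 probe, the ps comm= lookup, the reactor_0-vs-sudo check, then kill and wait) is the harness's killprocess helper. Roughly, it behaves like this sketch — an illustrative reconstruction, not the exact source in common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            # matches the "'[' reactor_0 = sudo ']'" check above: never kill a bare sudo wrapper
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"      # reap it so shm and hugepages are released before the next test starts
    }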
00:20:11.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.111 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:20:11.112 13:48:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:17.680 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:17.680 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:17.680 Found net devices under 0000:af:00.0: cvl_0_0 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:17.680 Found net devices under 0000:af:00.1: cvl_0_1 00:20:17.680 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:17.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:20:17.681 00:20:17.681 --- 10.0.0.2 ping statistics --- 00:20:17.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.681 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:20:17.681 00:20:17.681 --- 10.0.0.1 ping statistics --- 00:20:17.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.681 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1404243 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1404243 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 1404243 ']' 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:17.681 13:48:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:17.681 [2024-06-11 13:48:10.566878] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:20:17.681 [2024-06-11 13:48:10.566937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.939 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.939 [2024-06-11 13:48:10.676472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.939 [2024-06-11 13:48:10.760946] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.939 [2024-06-11 13:48:10.760996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.939 [2024-06-11 13:48:10.761009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.939 [2024-06-11 13:48:10.761022] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.939 [2024-06-11 13:48:10.761032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.939 [2024-06-11 13:48:10.761095] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.939 [2024-06-11 13:48:10.761188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.939 [2024-06-11 13:48:10.761301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.939 [2024-06-11 13:48:10.761301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 [2024-06-11 13:48:11.601705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 Malloc0 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:18.875 [2024-06-11 13:48:11.661934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1404433 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1404435 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.875 { 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme$subsystem", 00:20:18.875 "trtype": "$TEST_TRANSPORT", 00:20:18.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "$NVMF_PORT", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.875 "hdgst": ${hdgst:-false}, 00:20:18.875 "ddgst": ${ddgst:-false} 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 } 00:20:18.875 EOF 00:20:18.875 )") 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1404437 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:18.875 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.876 { 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme$subsystem", 00:20:18.876 "trtype": "$TEST_TRANSPORT", 00:20:18.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "$NVMF_PORT", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.876 "hdgst": ${hdgst:-false}, 00:20:18.876 "ddgst": ${ddgst:-false} 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 } 00:20:18.876 EOF 00:20:18.876 )") 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1404440 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.876 { 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme$subsystem", 00:20:18.876 "trtype": "$TEST_TRANSPORT", 00:20:18.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "$NVMF_PORT", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.876 "hdgst": ${hdgst:-false}, 00:20:18.876 "ddgst": ${ddgst:-false} 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 } 00:20:18.876 EOF 00:20:18.876 )") 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:18.876 13:48:11 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:18.876 { 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme$subsystem", 00:20:18.876 "trtype": "$TEST_TRANSPORT", 00:20:18.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "$NVMF_PORT", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.876 "hdgst": ${hdgst:-false}, 00:20:18.876 "ddgst": ${ddgst:-false} 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 } 00:20:18.876 EOF 00:20:18.876 )") 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1404433 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme1", 00:20:18.876 "trtype": "tcp", 00:20:18.876 "traddr": "10.0.0.2", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "4420", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.876 "hdgst": false, 00:20:18.876 "ddgst": false 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 }' 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
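Each of the four bdevperf instances above receives its controller config through bash process substitution rather than a temp file, which is why --json points at /dev/fd/63 in the command lines. A hedged sketch of the write-workload invocation, with the build/examples path shortened and gen_nvmf_target_json standing for the helper whose heredoc fragments and jq/printf plumbing appear in this trace:

    # one of four instances; the read/flush/unmap runs differ only in -m/-i/-w
    bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # the <() fd is what shows up as /dev/fd/63

Here -q, -o, -w and -t are queue depth, IO size, workload and runtime in seconds, while -s 256 caps the instance's DPDK memory so four bdevperf processes and the target can share one node's hugepages (visible as "-m 256" in the EAL parameter lines below).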
00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme1", 00:20:18.876 "trtype": "tcp", 00:20:18.876 "traddr": "10.0.0.2", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "4420", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.876 "hdgst": false, 00:20:18.876 "ddgst": false 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 }' 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme1", 00:20:18.876 "trtype": "tcp", 00:20:18.876 "traddr": "10.0.0.2", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "4420", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.876 "hdgst": false, 00:20:18.876 "ddgst": false 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 }' 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:18.876 13:48:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:18.876 "params": { 00:20:18.876 "name": "Nvme1", 00:20:18.876 "trtype": "tcp", 00:20:18.876 "traddr": "10.0.0.2", 00:20:18.876 "adrfam": "ipv4", 00:20:18.876 "trsvcid": "4420", 00:20:18.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.876 "hdgst": false, 00:20:18.876 "ddgst": false 00:20:18.876 }, 00:20:18.876 "method": "bdev_nvme_attach_controller" 00:20:18.876 }' 00:20:18.876 [2024-06-11 13:48:11.715308] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:18.876 [2024-06-11 13:48:11.715375] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:18.876 [2024-06-11 13:48:11.716045] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:18.876 [2024-06-11 13:48:11.716105] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:18.876 [2024-06-11 13:48:11.718358] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:18.876 [2024-06-11 13:48:11.718414] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:18.876 [2024-06-11 13:48:11.721914] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:20:18.876 [2024-06-11 13:48:11.721974] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:18.876 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.135 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.135 [2024-06-11 13:48:11.895431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.135 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.135 [2024-06-11 13:48:11.972148] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:20:19.135 [2024-06-11 13:48:11.987462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.135 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.392 [2024-06-11 13:48:12.052060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.393 [2024-06-11 13:48:12.072795] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:20:19.393 [2024-06-11 13:48:12.132172] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:20:19.393 [2024-06-11 13:48:12.152963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.393 [2024-06-11 13:48:12.246010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:20:19.393 Running I/O for 1 seconds... 00:20:19.651 Running I/O for 1 seconds... 00:20:19.651 Running I/O for 1 seconds... 00:20:19.651 Running I/O for 1 seconds... 00:20:20.590 00:20:20.590 Latency(us) 00:20:20.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.590 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:20.590 Nvme1n1 : 1.00 184159.91 719.37 0.00 0.00 692.46 285.08 832.31 00:20:20.590 =================================================================================================================== 00:20:20.590 Total : 184159.91 719.37 0.00 0.00 692.46 285.08 832.31 00:20:20.590 00:20:20.590 Latency(us) 00:20:20.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.590 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:20.590 Nvme1n1 : 1.02 6257.39 24.44 0.00 0.00 20243.97 7025.46 28521.27 00:20:20.590 =================================================================================================================== 00:20:20.590 Total : 6257.39 24.44 0.00 0.00 20243.97 7025.46 28521.27 00:20:20.590 00:20:20.590 Latency(us) 00:20:20.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.590 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:20.590 Nvme1n1 : 1.01 10320.66 40.32 0.00 0.00 12353.29 6763.32 22649.24 00:20:20.590 =================================================================================================================== 00:20:20.590 Total : 10320.66 40.32 0.00 0.00 12353.29 6763.32 22649.24 00:20:20.590 00:20:20.590 Latency(us) 00:20:20.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.590 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:20.590 Nvme1n1 : 1.01 5927.79 23.16 0.00 0.00 21520.35 6239.03 46556.77 00:20:20.590 =================================================================================================================== 00:20:20.590 Total : 5927.79 23.16 0.00 0.00 21520.35 6239.03 46556.77 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 1404435 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1404437 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1404440 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.848 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.848 rmmod nvme_tcp 00:20:20.848 rmmod nvme_fabrics 00:20:21.107 rmmod nvme_keyring 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1404243 ']' 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1404243 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 1404243 ']' 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 1404243 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1404243 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1404243' 00:20:21.107 killing process with pid 1404243 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 1404243 00:20:21.107 13:48:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 1404243 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.366 13:48:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.269 13:48:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.269 00:20:23.269 real 0m12.278s 00:20:23.269 user 0m20.236s 00:20:23.269 sys 0m7.042s 00:20:23.269 13:48:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:23.269 13:48:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:23.269 ************************************ 00:20:23.269 END TEST nvmf_bdev_io_wait 00:20:23.269 ************************************ 00:20:23.269 13:48:16 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:23.269 13:48:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:23.269 13:48:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:23.269 13:48:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.528 ************************************ 00:20:23.528 START TEST nvmf_queue_depth 00:20:23.528 ************************************ 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:23.528 * Looking for test storage... 00:20:23.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.528 13:48:16 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.528 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.529 13:48:16 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.529 13:48:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:30.100 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:30.100 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.100 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:30.101 Found net devices under 0000:af:00.0: cvl_0_0 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:30.101 Found net devices under 0000:af:00.1: cvl_0_1 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:30.101 13:48:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:30.101 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:30.101 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:30.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:30.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms
00:20:30.360
00:20:30.360 --- 10.0.0.2 ping statistics ---
00:20:30.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:30.360 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:30.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:30.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms
00:20:30.360
00:20:30.360 --- 10.0.0.1 ping statistics ---
00:20:30.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:30.360 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:30.360 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1408496
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1408496
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1408496 ']'
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:20:30.361 13:48:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:30.620 [2024-06-11 13:48:23.259626] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:20:30.620 [2024-06-11 13:48:23.259689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:30.620 EAL: No free 2048 kB hugepages reported on node 1
00:20:30.620 [2024-06-11 13:48:23.356483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.620 [2024-06-11 13:48:23.442668] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:30.620 [2024-06-11 13:48:23.442708] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:30.620 [2024-06-11 13:48:23.442722] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:30.620 [2024-06-11 13:48:23.442733] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:30.620 [2024-06-11 13:48:23.442743] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
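
The trace above shows nvmftestinit building the point-to-point NVMe/TCP topology for this test: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), the other port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then launched inside the namespace. A condensed sketch of the traced commands, using the interface names and addresses from this run (the real helper in nvmf/common.sh wraps each step in error handling):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # plumbing check only: 0.195 ms above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Nothing is listening on port 4420 at this point; the pings only prove namespace reachability, and waitforlisten then polls until the target's RPC socket /var/tmp/spdk.sock appears.
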
00:20:30.620 [2024-06-11 13:48:23.442770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:31.557 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.557 [2024-06-11 13:48:24.275330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.558 Malloc0
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.558 [2024-06-11 13:48:24.326353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1408712
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1408712 /var/tmp/bdevperf.sock
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1408712 ']'
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:31.558 13:48:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:31.817 [2024-06-11 13:48:24.377860] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:20:31.817 [2024-06-11 13:48:24.377916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408712 ]
00:20:31.817 EAL: No free 2048 kB hugepages reported on node 1
00:20:31.817 [2024-06-11 13:48:24.478273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.817 [2024-06-11 13:48:24.564790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:20:32.386 13:48:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:20:32.386 13:48:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0
00:20:32.386 13:48:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:32.386 13:48:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:32.386 13:48:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:32.644 NVMe0n1
00:20:32.644 13:48:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:32.644 13:48:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:32.644 Running I/O for 10 seconds...
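
With the target's reactor running, queue_depth.sh drives the whole setup over RPC: create the TCP transport, back it with a 64 MiB malloc bdev, expose that as a namespace of cnode1, add the 10.0.0.2:4420 listener, then start a bdevperf initiator in idle (-z) mode and attach it to the subsystem through its own RPC socket. The rpc_cmd helper appears to forward to scripts/rpc.py, so the same sequence can be reproduced by hand; a sketch assuming rpc.py and the SPDK example binaries are on PATH (paths shortened from the ones traced above):

    # target side, default RPC socket /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB I/O unit
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf idles until a controller is attached via its socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests  # triggers the 10 s run

The -q 1024 queue depth is the point of the test. The result table below is internally consistent: MiB/s is just IOPS x 4 KiB (9171.17 x 4096 / 2^20 ≈ 35.82), and Little's law recovers the queue depth from the average latency (9171.17 IOPS x 0.111157 s ≈ 1019 ≈ 1024 outstanding I/Os).
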
00:20:42.687
00:20:42.687 Latency(us)
00:20:42.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:42.687 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:20:42.687 Verification LBA range: start 0x0 length 0x4000
00:20:42.687 NVMe0n1 : 10.06 9171.17 35.82 0.00 0.00 111157.29 11219.76 80530.64
00:20:42.687 ===================================================================================================================
00:20:42.687 Total : 9171.17 35.82 0.00 0.00 111157.29 11219.76 80530.64
00:20:42.687 0
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1408712
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1408712 ']'
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1408712
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1408712
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1408712'
killing process with pid 1408712
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1408712
Received shutdown signal, test time was about 10.000000 seconds
00:20:42.687
00:20:42.687 Latency(us)
00:20:42.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:42.687 ===================================================================================================================
00:20:42.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:42.687 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1408712
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:42.947 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:42.947 rmmod nvme_tcp
00:20:42.947 rmmod nvme_fabrics
00:20:42.947 rmmod nvme_keyring
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1408496 ']'
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1408496
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1408496 ']'
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1408496
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1408496
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1408496'
killing process with pid 1408496
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1408496
00:20:43.207 13:48:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1408496
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:43.466 13:48:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:45.375 13:48:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:45.375
00:20:45.375 real 0m21.994s
00:20:45.375 user 0m25.401s
00:20:45.375 sys 0m7.106s
00:20:45.375 13:48:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable
00:20:45.375 13:48:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:20:45.375 ************************************
00:20:45.375 END TEST nvmf_queue_depth
00:20:45.375 ************************************
00:20:45.375 13:48:38 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:20:45.375 13:48:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:20:45.375 13:48:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:20:45.375 13:48:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:45.634 ************************************
00:20:45.634 START TEST nvmf_target_multipath
00:20:45.634 ************************************
00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:20:45.634 * Looking for test storage...
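
An aside before the multipath output: the teardown that just scrolled past is the second verbatim appearance of nvmftestfini in this section (after nvmf_bdev_io_wait, and again here after nvmf_queue_depth). Condensed from the traced commands only; the actual bodies in nvmf/common.sh may differ in detail, e.g. how the modprobe retry loop terminates:

    nvmftestfini() {
        sync
        set +e                                   # unloading can fail while connections drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break     # the rmmod lines in the log are its -v output
        done
        modprobe -v -r nvme-fabrics
        set -e
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"   # kill -0 liveness probe, ps comm, kill, wait
        remove_spdk_ns                           # drops the cvl_0_0_ns_spdk namespace
        ip -4 addr flush cvl_0_1                 # clears the 10.0.0.1/24 test address
    }

The '[' -n ... ']' guard before killprocess matters later in this log: the queue_depth run kills nvmfpid 1408496 here, while the multipath test below finishes with '[' -n '' ']' because it exits before ever starting a target.
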
00:20:45.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.634 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.635 
13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.635 13:48:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:52.206 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.206 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.465 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:52.466 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.466 
13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:52.466 Found net devices under 0000:af:00.0: cvl_0_0 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:52.466 Found net devices under 0000:af:00.1: cvl_0_1 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.466 
13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:52.466 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:52.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:52.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:20:52.725
00:20:52.725 --- 10.0.0.2 ping statistics ---
00:20:52.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.725 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:52.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:52.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:20:52.725
00:20:52.725 --- 10.0.0.1 ping statistics ---
00:20:52.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.725 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:20:52.725 only one NIC for nvmf test
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:52.725 rmmod nvme_tcp
00:20:52.725 rmmod nvme_fabrics
00:20:52.725 rmmod nvme_keyring
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:52.725 13:48:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:55.261
00:20:55.261 real 0m9.380s
00:20:55.261 user 0m1.892s
00:20:55.261 sys 0m5.514s
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable
00:20:55.261 13:48:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:20:55.261
************************************ 00:20:55.261 END TEST nvmf_target_multipath 00:20:55.261 ************************************ 00:20:55.262 13:48:47 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:55.262 13:48:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:55.262 13:48:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:55.262 13:48:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:55.262 ************************************ 00:20:55.262 START TEST nvmf_zcopy 00:20:55.262 ************************************ 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:55.262 * Looking for test storage... 00:20:55.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go trio repeated four more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same accumulated toolchain list as above, now with go first] 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same accumulated toolchain list as above, now with protoc first] 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same accumulated toolchain list as above]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs
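Worth noting from the common.sh sourcing traced above: the host identity used by later connect steps is generated per run, not hard-coded. A sketch of that setup (nvme gen-hostnqn is a real nvme-cli command seen in the trace; deriving NVME_HOSTID by stripping the NQN prefix is an assumption consistent with the logged values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the trailing UUID (assumed derivation)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")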
00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.262 13:48:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.833 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
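gather_supported_nvmf_pci_devs, set up above, buckets NICs by PCI vendor:device ID; 0x8086:0x159b is the Intel E810 part bound to the ice driver in this run. The same discovery can be approximated by hand. This standalone sketch is not the harness code, just one way to reproduce its result:

  # List E810 functions, then the netdev name(s) each one exposes via sysfs.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci"
      ls "/sys/bus/pci/devices/$pci/net/"    # e.g. cvl_0_0
  done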
00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:01.834 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:01.834 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:01.834 Found net devices under 0000:af:00.0: cvl_0_0 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:01.834 Found net devices under 0000:af:00.1: cvl_0_1 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.834 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:21:02.094 00:21:02.094 --- 10.0.0.2 ping statistics --- 00:21:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.094 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:21:02.094 00:21:02.094 --- 10.0.0.1 ping statistics --- 00:21:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.094 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.094 13:48:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.353 13:48:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1417957 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1417957 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 1417957 ']' 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:02.354 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:02.354 [2024-06-11 13:48:55.074679] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:02.354 [2024-06-11 13:48:55.074744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.354 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.354 [2024-06-11 13:48:55.173897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.354 [2024-06-11 13:48:55.258948] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.354 [2024-06-11 13:48:55.258988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
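The nvmf_tcp_init sequence traced above is worth reading as one unit: the two E810 ports are split so cvl_0_0 serves as the target inside a fresh network namespace while cvl_0_1 stays on the host as the initiator, and connectivity is proven in both directions before any NVMe traffic flows. Condensed from the logged commands:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host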
00:21:02.354 [2024-06-11 13:48:55.259002] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.354 [2024-06-11 13:48:55.259013] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.354 [2024-06-11 13:48:55.259023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.354 [2024-06-11 13:48:55.259054] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.290 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:03.290 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:21:03.290 13:48:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.290 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:03.290 13:48:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 [2024-06-11 13:48:56.031268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 [2024-06-11 13:48:56.047454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 malloc0 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.290 
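nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and blocks until the RPC socket answers. A sketch of that bring-up; the polling loop is an assumed stand-in for the harness's waitforlisten helper, using the real spdk_get_version RPC as a liveness probe:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
      kill -0 "$nvmfpid"     # give up if the target process died
      sleep 0.1
  done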
13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:03.290 13:48:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.291 { 00:21:03.291 "params": { 00:21:03.291 "name": "Nvme$subsystem", 00:21:03.291 "trtype": "$TEST_TRANSPORT", 00:21:03.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.291 "adrfam": "ipv4", 00:21:03.291 "trsvcid": "$NVMF_PORT", 00:21:03.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.291 "hdgst": ${hdgst:-false}, 00:21:03.291 "ddgst": ${ddgst:-false} 00:21:03.291 }, 00:21:03.291 "method": "bdev_nvme_attach_controller" 00:21:03.291 } 00:21:03.291 EOF 00:21:03.291 )") 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:03.291 13:48:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:03.291 "params": { 00:21:03.291 "name": "Nvme1", 00:21:03.291 "trtype": "tcp", 00:21:03.291 "traddr": "10.0.0.2", 00:21:03.291 "adrfam": "ipv4", 00:21:03.291 "trsvcid": "4420", 00:21:03.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.291 "hdgst": false, 00:21:03.291 "ddgst": false 00:21:03.291 }, 00:21:03.291 "method": "bdev_nvme_attach_controller" 00:21:03.291 }' 00:21:03.291 [2024-06-11 13:48:56.130430] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:03.291 [2024-06-11 13:48:56.130497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418236 ] 00:21:03.291 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.550 [2024-06-11 13:48:56.231877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.550 [2024-06-11 13:48:56.313935] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.809 Running I/O for 10 seconds... 
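Putting the last two entries together: rpc_cmd drives scripts/rpc.py against the target's /var/tmp/spdk.sock, and gen_nvmf_target_json feeds bdevperf a bdev_nvme_attach_controller config over a process-substitution fd (hence the /dev/fd/62 path in the trace). A sketch with flags copied from the trace; the outer "subsystems" wrapper is an assumption, since the trace prints only the params object:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  cfg='{"subsystems":[{"subsystem":"bdev","config":[{
        "method":"bdev_nvme_attach_controller",
        "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2",
                  "adrfam":"ipv4","trsvcid":"4420",
                  "subnqn":"nqn.2016-06.io.spdk:cnode1",
                  "hostnqn":"nqn.2016-06.io.spdk:host1",
                  "hdgst":false,"ddgst":false}}]}]}'   # wrapper layout assumed
  ./build/examples/bdevperf --json <(echo "$cfg") -t 10 -q 128 -w verify -o 8192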
00:21:13.787 00:21:13.787 Latency(us) 00:21:13.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.787 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:21:13.787 Verification LBA range: start 0x0 length 0x1000 00:21:13.787 Nvme1n1 : 10.05 6378.62 49.83 0.00 0.00 19919.02 3211.26 41523.61 00:21:13.787 =================================================================================================================== 00:21:13.787 Total : 6378.62 49.83 0.00 0.00 19919.02 3211.26 41523.61 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1420063 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:14.046 [2024-06-11 13:49:06.798168] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.046 [2024-06-11 13:49:06.798205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:14.046 { 00:21:14.046 "params": { 00:21:14.046 "name": "Nvme$subsystem", 00:21:14.046 "trtype": "$TEST_TRANSPORT", 00:21:14.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.046 "adrfam": "ipv4", 00:21:14.046 "trsvcid": "$NVMF_PORT", 00:21:14.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.046 "hdgst": ${hdgst:-false}, 00:21:14.046 "ddgst": ${ddgst:-false} 00:21:14.046 }, 00:21:14.046 "method": "bdev_nvme_attach_controller" 00:21:14.046 } 00:21:14.046 EOF 00:21:14.046 )") 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
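A quick consistency check on the verify-run table above: throughput should equal IOPS times IO size, and 6378.62 IO/s * 8192 B = 52,253,655 B/s, which divided by 2^20 gives 49.83 MiB/s, matching the MiB/s column. The average latency is likewise plausible by Little's law for queue depth 128: 128 / 6378.62 IO/s = 20067 us, close to the reported 19919.02 us average.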
00:21:14.046 [2024-06-11 13:49:06.810173] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.046 [2024-06-11 13:49:06.810190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:14.046 13:49:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:14.046 "params": { 00:21:14.046 "name": "Nvme1", 00:21:14.046 "trtype": "tcp", 00:21:14.046 "traddr": "10.0.0.2", 00:21:14.046 "adrfam": "ipv4", 00:21:14.046 "trsvcid": "4420", 00:21:14.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.046 "hdgst": false, 00:21:14.046 "ddgst": false 00:21:14.047 }, 00:21:14.047 "method": "bdev_nvme_attach_controller" 00:21:14.047 }' 00:21:14.047 [2024-06-11 13:49:06.822202] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.822218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.834238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.834253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.842758] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:14.047 [2024-06-11 13:49:06.842816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420063 ] 00:21:14.047 [2024-06-11 13:49:06.846268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.846283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.858303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.858319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.870335] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.870350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.882369] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.882383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.047 [2024-06-11 13:49:06.894403] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.894418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.906435] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.906450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.918469] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.918490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [2024-06-11 13:49:06.930507] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.047 [2024-06-11 13:49:06.930522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.047 [... the same two-message ERROR pair repeats at roughly 12 ms intervals from 13:49:06.942539; repetitions omitted ...] 00:21:14.047 [2024-06-11 13:49:06.943623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.306 [... ERROR pair repetitions continue; omitted ...] 00:21:14.307 [2024-06-11 13:49:07.027721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.307 [... ERROR pair repetitions continue through 13:49:07.351743; omitted ...] 00:21:14.586 Running I/O for 5 seconds... 
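The ERROR pairs blanketing this run come from repeated nvmf_subsystem_add_ns calls against an NSID that already exists; each attempt pauses the subsystem, is rejected, and is logged as the pair above. Only the RPC name and arguments appear in the trace; the driving loop below is a hedged guess at the shape of the test logic, not the script's actual code:

  # Hypothetical driver for the repeated failures: keep re-adding NSID 1
  # while bdevperf I/O is in flight, expecting each attempt to be rejected.
  while kill -0 "$perfpid" 2>/dev/null; do
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done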
00:21:14.586 [... from 13:49:07.369553 onward the identical ERROR pair ("Requested NSID 1 already in use" / "Unable to add namespace") keeps repeating every 12-17 ms for the rest of the captured output; the final pair begins at 13:49:09.001211, where this portion of the log cuts off ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.179 [2024-06-11 13:49:09.017438] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.179 [2024-06-11 13:49:09.017462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.179 [2024-06-11 13:49:09.035668] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.179 [2024-06-11 13:49:09.035697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.179 [2024-06-11 13:49:09.050176] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.179 [2024-06-11 13:49:09.050200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.179 [2024-06-11 13:49:09.066346] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.179 [2024-06-11 13:49:09.066370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.179 [2024-06-11 13:49:09.083444] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.179 [2024-06-11 13:49:09.083468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.100221] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.100244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.117330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.117356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.132322] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.132347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.148859] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.148884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.166217] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.166240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.181706] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.181730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.193288] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.193312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.210655] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.210680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.226900] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.226929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.244753] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.244777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.259543] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.259568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.277710] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.277734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.292286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.292311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.309726] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.309749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.324991] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.325016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.438 [2024-06-11 13:49:09.336370] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.438 [2024-06-11 13:49:09.336394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.352894] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.352918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.369481] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.369506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.386103] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.386127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.401938] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.401962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.413577] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.413600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.431284] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.431308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.447610] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.447634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.464879] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.464903] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.481093] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.481117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.498855] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.498880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.514379] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.514403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.525777] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.525800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.542567] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.542597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.558780] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.558804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.574712] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.574736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.586603] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.586627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.698 [2024-06-11 13:49:09.603312] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.698 [2024-06-11 13:49:09.603336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.620357] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.620382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.637220] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.637244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.653346] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.653369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.671488] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.671511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.686952] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.686976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.698406] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.698430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.715881] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.715905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.731216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.731241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.742493] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.742517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.759619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.759644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.774795] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.774820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.790400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.790424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.809037] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.809062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.822578] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.822604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.838537] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.838562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.958 [2024-06-11 13:49:09.855146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.958 [2024-06-11 13:49:09.855170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.871400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.871424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.889003] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.889028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.904458] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.904492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.915850] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.915874] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.932467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.932497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.947978] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.948002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.965778] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.965802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.981686] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.981711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.217 [2024-06-11 13:49:09.999041] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.217 [2024-06-11 13:49:09.999065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.016104] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.016148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.032741] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.032767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.050965] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.050990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.065770] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.065796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.077630] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.077656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.094131] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.094155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.109814] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.109838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.218 [2024-06-11 13:49:10.121298] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.218 [2024-06-11 13:49:10.121322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.138925] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.138951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.154049] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.154075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.170450] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.170474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.187495] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.187519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.204255] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.204279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.220811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.220836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.239062] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.239086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.253823] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.253847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.265515] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.265538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.282606] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.282630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.298336] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.298360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.316152] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.316176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.331638] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.331662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.340975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.340998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.356277] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.356301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.477 [2024-06-11 13:49:10.372281] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.477 [2024-06-11 13:49:10.372309] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.390827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.390852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.405199] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.405224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.422824] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.422848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.438130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.438155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.449377] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.449402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.466740] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.466765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.481087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.481112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.498890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.498915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.512560] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.512588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.527798] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.527821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.539367] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.539391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.556513] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.556536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.573401] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.573425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.591550] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.591574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.606602] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.606626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.623351] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.623375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.737 [2024-06-11 13:49:10.639116] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.737 [2024-06-11 13:49:10.639139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.649333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.649357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.663158] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.663181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.679338] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.679362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.697161] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.697185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.712520] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.712544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.723948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.723972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.739718] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.739741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.756976] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.756999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.774094] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.774117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.791610] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.791633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.807347] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.807376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.819262] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.819286] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.836105] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.836128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.851566] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.851590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.861060] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.861084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.875948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.875972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.997 [2024-06-11 13:49:10.892099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.997 [2024-06-11 13:49:10.892123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:10.909066] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:10.909090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:10.925522] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:10.925546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:10.941811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:10.941835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:10.958887] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:10.958910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:10.975062] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:10.975086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:10.993725] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:10.993750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.008497] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.008520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.026633] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.026668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.041249] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.041275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.052678] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.052701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.069404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.069430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.086030] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.086054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.103196] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.103226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.119006] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.119031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.131003] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.131027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.147745] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.147770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.257 [2024-06-11 13:49:11.163972] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.257 [2024-06-11 13:49:11.163998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.516 [2024-06-11 13:49:11.180897] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.516 [2024-06-11 13:49:11.180922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.516 [2024-06-11 13:49:11.197670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.516 [2024-06-11 13:49:11.197695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.516 [2024-06-11 13:49:11.216111] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.516 [2024-06-11 13:49:11.216136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.230867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.230891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.249105] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.249129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.264549] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.264573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.275881] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.275905] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.292614] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.292639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.307171] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.307195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.323464] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.323499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.339883] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.339909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.357351] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.357377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.372517] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.372541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.383851] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.383876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.401271] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.401304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.517 [2024-06-11 13:49:11.416699] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.517 [2024-06-11 13:49:11.416722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.435838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.435862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.450288] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.450313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.466333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.466357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.483779] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.483803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.500146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.500170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.516166] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.516191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.528030] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.528054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.545519] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.545544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.561076] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.561101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.572648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.572672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.589252] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.589276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.605022] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.605046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.622499] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.622523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.639582] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.639606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.655117] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.655141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.665005] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.665028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.776 [2024-06-11 13:49:11.678619] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.776 [2024-06-11 13:49:11.678642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.694935] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.694959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.710895] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.710919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.728864] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.728888] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.743041] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.743065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.759640] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.759664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.775820] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.775843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.792889] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.792912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.809512] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.809536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.826573] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.826597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.843141] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.843165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.859537] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.859560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.875587] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.875610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.893982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.894007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.908903] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.908927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.925400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.925424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.036 [2024-06-11 13:49:11.942063] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.036 [2024-06-11 13:49:11.942087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:11.958559] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:11.958582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:11.975020] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:11.975043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:11.992048] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:11.992071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.008411] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.008435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.025372] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.025396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.041025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.041049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.052193] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.052217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.069945] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.069969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.084580] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.084604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.100781] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.100805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.117937] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.117965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.134164] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.134187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.151439] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.151463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.167707] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.167730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.184345] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.184369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.296 [2024-06-11 13:49:12.201065] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.296 [2024-06-11 13:49:12.201089] 
[... the pair continues at the same cadence from [2024-06-11 13:49:12.217351] through [2024-06-11 13:49:12.373453]; repetitions omitted ...]
00:21:19.555
00:21:19.555  Latency(us)
00:21:19.555 Device Information                                                         : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:21:19.555 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:21:19.555 Nvme1n1                                                                    :       5.01   12527.82      97.87      0.00     0.00   10205.62    4430.23   21705.52
00:21:19.555 ===================================================================================================================
00:21:19.555 Total                                                                      :              12527.82      97.87      0.00     0.00   10205.62    4430.23   21705.52
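The flood of paired errors above is the expected failure mode of re-adding an NSID that is already claimed: the nvmf_subsystem_add_ns RPC pauses the subsystem, the add is attempted from the nvmf_rpc_ns_paused callback, and spdk_nvmf_subsystem_add_ns_ext rejects the duplicate. A minimal sketch of reproducing the condition against a running SPDK target with the stock scripts/rpc.py client follows; the malloc bdev size and the allow-any-host flag are illustrative assumptions, not values taken from this log.

# Back a subsystem with a malloc bdev (64 MiB, 512-byte blocks; sizes illustrative).
scripts/rpc.py bdev_malloc_create -b malloc0 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
# The first add claims NSID 1.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Every further add with -n 1 fails with "Requested NSID 1 already in use".
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1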
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.555 [2024-06-11 13:49:12.397512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.555 [2024-06-11 13:49:12.409539] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.555 [2024-06-11 13:49:12.409560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.555 [2024-06-11 13:49:12.421561] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.555 [2024-06-11 13:49:12.421580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.555 [2024-06-11 13:49:12.433596] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.555 [2024-06-11 13:49:12.433613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.555 [2024-06-11 13:49:12.445626] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.555 [2024-06-11 13:49:12.445643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.555 [2024-06-11 13:49:12.457657] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.555 [2024-06-11 13:49:12.457674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.469689] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.469706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.481723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.481740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.493754] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.493769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.505790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.505812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.517820] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.517836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.529852] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.529867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.541886] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.541901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.553919] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.553934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 [2024-06-11 13:49:12.565952] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:19.814 [2024-06-11 13:49:12.565967] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:19.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1420063) - No such process 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1420063 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:19.814 delay0 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.814 13:49:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:19.814 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.073 [2024-06-11 13:49:12.764628] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:26.644 Initializing NVMe Controllers 00:21:26.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.644 Initialization complete. Launching workers. 
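Distilled into plain shell, the delay0/abort setup logged just above (whose statistics follow) is roughly the sequence below. This is a minimal sketch, not the literal zcopy.sh: $SPDK stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, rpc_cmd in the log is the harness wrapper around $SPDK/scripts/rpc.py, and every flag value is copied from the commands recorded above.

  # Swap the subsystem's namespace for a delay bdev so that I/O stays in
  # flight long enough for abort commands to find something to cancel.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, microseconds
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Run the abort example for 5 s on one core: queue depth 64, 50/50 random
  # read/write, submitting aborts against the deliberately slow namespace.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 1 s delays on delay0 are what leave enough I/O in flight for the aborts counted in the statistics below.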
00:21:26.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 183 00:21:26.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 470, failed to submit 33 00:21:26.644 success 282, unsuccess 188, failed 0 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:26.644 rmmod nvme_tcp 00:21:26.644 rmmod nvme_fabrics 00:21:26.644 rmmod nvme_keyring 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1417957 ']' 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1417957 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 1417957 ']' 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 1417957 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:26.644 13:49:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1417957 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1417957' 00:21:26.644 killing process with pid 1417957 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 1417957 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 1417957 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.644 13:49:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.595 13:49:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:28.595 00:21:28.595 real 0m33.553s 00:21:28.595 user 0m43.365s 00:21:28.595 sys 0m12.970s 00:21:28.596 13:49:21 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:21:28.596 13:49:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:28.596 ************************************ 00:21:28.596 END TEST nvmf_zcopy 00:21:28.596 ************************************ 00:21:28.596 13:49:21 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:28.596 13:49:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:28.596 13:49:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:28.596 13:49:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.596 ************************************ 00:21:28.596 START TEST nvmf_nmic 00:21:28.596 ************************************ 00:21:28.596 13:49:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:28.855 * Looking for test storage... 00:21:28.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated four more times, then /opt/golangci/1.54.2/bin ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.855 13:49:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same entry list as @2 with /opt/go/1.21.1/bin prepended ...] 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same entry list with /opt/protoc/21.7/bin prepended ...] 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... the exported PATH from @4 ...] 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic --
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.856 13:49:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.432 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:35.433 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:35.433 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:35.433 Found net devices under 0000:af:00.0: cvl_0_0 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:35.433 Found net devices under 0000:af:00.1: cvl_0_1 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.433 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.692 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.693 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.693 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.693 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.693 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.693 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.693 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.951 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.951 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:21:35.952 00:21:35.952 --- 10.0.0.2 ping statistics --- 00:21:35.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.952 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:35.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:21:35.952 00:21:35.952 --- 10.0.0.1 ping statistics --- 00:21:35.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.952 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1425690 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1425690 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 1425690 ']' 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:35.952 13:49:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:35.952 [2024-06-11 13:49:28.737741] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:35.952 [2024-06-11 13:49:28.737800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.952 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.952 [2024-06-11 13:49:28.847312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.212 [2024-06-11 13:49:28.932028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.212 [2024-06-11 13:49:28.932080] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:36.212 [2024-06-11 13:49:28.932094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.212 [2024-06-11 13:49:28.932106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.212 [2024-06-11 13:49:28.932115] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.212 [2024-06-11 13:49:28.932175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.212 [2024-06-11 13:49:28.932269] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.212 [2024-06-11 13:49:28.932384] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.212 [2024-06-11 13:49:28.932384] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.780 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:36.780 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:21:36.780 13:49:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.780 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:36.780 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.039 13:49:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 [2024-06-11 13:49:29.698751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 Malloc0 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 [2024-06-11 13:49:29.754685] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:37.040 test case1: single bdev can't be used in multiple subsystems 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 [2024-06-11 13:49:29.778539] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:37.040 [2024-06-11 13:49:29.778566] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:37.040 [2024-06-11 13:49:29.778580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:37.040 request: 00:21:37.040 { 00:21:37.040 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:37.040 "namespace": { 00:21:37.040 "bdev_name": "Malloc0", 00:21:37.040 "no_auto_visible": false 00:21:37.040 }, 00:21:37.040 "method": "nvmf_subsystem_add_ns", 00:21:37.040 "req_id": 1 00:21:37.040 } 00:21:37.040 Got JSON-RPC error response 00:21:37.040 response: 00:21:37.040 { 00:21:37.040 "code": -32602, 00:21:37.040 "message": "Invalid parameters" 00:21:37.040 } 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:37.040 Adding namespace failed - expected result. 
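The expected failure above comes from SPDK's bdev claim semantics: the first nvmf_subsystem_add_ns opens Malloc0 with an exclusive_write claim (the bdev.c:8035 message), so the second subsystem's open is rejected and the RPC returns -32602. Stripped of the test harness, the reproduction is roughly the sequence below: a minimal sketch where rpc.py is spdk/scripts/rpc.py, a target like the one started in this test is assumed to be running, and all arguments mirror the rpc_cmd calls logged above.

  rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed

The claim is released only when the namespace is removed from cnode1, which is why test case2 below reuses cnode1 (adding a second listener) instead of a second subsystem.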
00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:37.040 test case2: host connect to nvmf target in multiple paths 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:37.040 [2024-06-11 13:49:29.794713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.040 13:49:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:38.418 13:49:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:39.801 13:49:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:39.801 13:49:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:21:39.801 13:49:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:39.801 13:49:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:21:39.801 13:49:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:21:41.741 13:49:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:41.742 13:49:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:41.742 13:49:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:41.742 13:49:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:21:41.742 13:49:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:21:41.742 13:49:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:21:41.742 13:49:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:41.742 [global] 00:21:41.742 thread=1 00:21:41.742 invalidate=1 00:21:41.742 rw=write 00:21:41.742 time_based=1 00:21:41.742 runtime=1 00:21:41.742 ioengine=libaio 00:21:41.742 direct=1 00:21:41.742 bs=4096 00:21:41.742 iodepth=1 00:21:41.742 norandommap=0 00:21:41.742 numjobs=1 00:21:41.742 00:21:41.742 verify_dump=1 00:21:41.742 verify_backlog=512 00:21:41.742 verify_state_save=0 00:21:41.742 do_verify=1 00:21:41.742 verify=crc32c-intel 00:21:41.742 [job0] 00:21:41.742 filename=/dev/nvme0n1 00:21:41.742 Could not set queue depth (nvme0n1) 00:21:42.000 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:42.000 fio-3.35 00:21:42.000 Starting 1 thread 00:21:43.380 00:21:43.380 job0: (groupid=0, jobs=1): err= 0: pid=1426894: Tue Jun 11 13:49:36 2024 00:21:43.380 read: IOPS=20, BW=81.8KiB/s (83.8kB/s)(84.0KiB/1027msec) 00:21:43.380 slat (nsec): min=11914, max=27704, avg=24136.81, stdev=3203.46 
00:21:43.380 clat (usec): min=40765, max=41285, avg=40981.68, stdev=113.38 00:21:43.380 lat (usec): min=40793, max=41297, avg=41005.82, stdev=111.36 00:21:43.380 clat percentiles (usec): 00:21:43.380 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:43.380 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:43.380 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:43.380 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:43.380 | 99.99th=[41157] 00:21:43.380 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:21:43.380 slat (usec): min=12, max=27217, avg=66.65, stdev=1202.28 00:21:43.380 clat (usec): min=208, max=1089, avg=254.46, stdev=53.02 00:21:43.380 lat (usec): min=220, max=27651, avg=321.11, stdev=1211.34 00:21:43.380 clat percentiles (usec): 00:21:43.380 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:21:43.380 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 251], 00:21:43.380 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:21:43.380 | 99.00th=[ 277], 99.50th=[ 441], 99.90th=[ 1090], 99.95th=[ 1090], 00:21:43.380 | 99.99th=[ 1090] 00:21:43.380 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:21:43.380 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:43.380 lat (usec) : 250=48.41%, 500=47.28% 00:21:43.380 lat (msec) : 2=0.38%, 50=3.94% 00:21:43.380 cpu : usr=0.97%, sys=0.49%, ctx=535, majf=0, minf=2 00:21:43.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.380 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:43.380 00:21:43.380 Run status group 0 (all jobs): 00:21:43.380 READ: bw=81.8KiB/s (83.8kB/s), 81.8KiB/s-81.8KiB/s (83.8kB/s-83.8kB/s), io=84.0KiB (86.0kB), run=1027-1027msec 00:21:43.380 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:21:43.380 00:21:43.380 Disk stats (read/write): 00:21:43.380 nvme0n1: ios=43/512, merge=0/0, ticks=1684/123, in_queue=1807, util=98.90% 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:43.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 
-- # nvmfcleanup 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.380 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.380 rmmod nvme_tcp 00:21:43.640 rmmod nvme_fabrics 00:21:43.640 rmmod nvme_keyring 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1425690 ']' 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1425690 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 1425690 ']' 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 1425690 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1425690 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1425690' 00:21:43.640 killing process with pid 1425690 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 1425690 00:21:43.640 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 1425690 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.900 13:49:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.808 13:49:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.808 00:21:45.808 real 0m17.291s 00:21:45.808 user 0m43.211s 00:21:45.808 sys 0m6.587s 00:21:45.808 13:49:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:45.808 13:49:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:45.808 ************************************ 00:21:45.808 END TEST nvmf_nmic 00:21:45.808 ************************************ 00:21:46.068 13:49:38 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:46.068 13:49:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:46.068 13:49:38 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:21:46.068 13:49:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.068 ************************************ 00:21:46.068 START TEST nvmf_fio_target 00:21:46.068 ************************************ 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:46.068 * Looking for test storage... 00:21:46.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated four more times, then /opt/golangci/1.54.2/bin ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same entry list as @2 with /opt/go/1.21.1/bin prepended ...] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same entry list with /opt/protoc/21.7/bin prepended ...] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo [... the exported PATH from @4 ...] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- #
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.068 13:49:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.193 13:49:45 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:54.193 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:54.193 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.193 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.193 13:49:45 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:54.193 Found net devices under 0000:af:00.0: cvl_0_0 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:54.194 Found net devices under 0000:af:00.1: cvl_0_1 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.194 13:49:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:21:54.194 00:21:54.194 --- 10.0.0.2 ping statistics --- 00:21:54.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.194 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:21:54.194 00:21:54.194 --- 10.0.0.1 ping statistics --- 00:21:54.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.194 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1430869 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1430869 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 1430869 ']' 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:54.194 13:49:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.194 [2024-06-11 13:49:46.199798] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:54.194 [2024-06-11 13:49:46.199856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.194 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.194 [2024-06-11 13:49:46.309440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.194 [2024-06-11 13:49:46.396735] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.194 [2024-06-11 13:49:46.396777] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.194 [2024-06-11 13:49:46.396791] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.194 [2024-06-11 13:49:46.396803] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.194 [2024-06-11 13:49:46.396813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.194 [2024-06-11 13:49:46.396875] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.194 [2024-06-11 13:49:46.396969] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.194 [2024-06-11 13:49:46.397078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.194 [2024-06-11 13:49:46.397078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.453 13:49:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:54.712 [2024-06-11 13:49:47.364399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.712 13:49:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:54.971 13:49:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:54.971 13:49:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:54.971 13:49:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:54.971 13:49:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:55.231 13:49:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:55.231 13:49:48 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:55.490 13:49:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:55.490 13:49:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:55.749 13:49:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:56.008 13:49:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:56.008 13:49:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:56.268 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:56.268 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:56.527 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:56.527 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:56.786 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:57.045 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:57.045 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:57.305 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:57.305 13:49:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:57.305 13:49:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.564 [2024-06-11 13:49:50.393353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.564 13:49:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:57.823 13:49:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:58.082 13:49:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:59.461 13:49:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:59.461 13:49:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:21:59.461 13:49:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:21:59.461 13:49:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:21:59.461 13:49:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:21:59.461 13:49:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:22:01.365 13:49:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:01.365 [global] 00:22:01.365 thread=1 00:22:01.365 invalidate=1 00:22:01.365 rw=write 00:22:01.365 time_based=1 00:22:01.365 runtime=1 00:22:01.365 ioengine=libaio 00:22:01.365 direct=1 00:22:01.365 bs=4096 00:22:01.365 iodepth=1 00:22:01.365 norandommap=0 00:22:01.365 numjobs=1 00:22:01.365 00:22:01.365 verify_dump=1 00:22:01.365 verify_backlog=512 00:22:01.365 verify_state_save=0 00:22:01.365 do_verify=1 00:22:01.365 verify=crc32c-intel 00:22:01.365 [job0] 00:22:01.365 filename=/dev/nvme0n1 00:22:01.365 [job1] 00:22:01.365 filename=/dev/nvme0n2 00:22:01.365 [job2] 00:22:01.365 filename=/dev/nvme0n3 00:22:01.365 [job3] 00:22:01.365 filename=/dev/nvme0n4 00:22:01.653 Could not set queue depth (nvme0n1) 00:22:01.653 Could not set queue depth (nvme0n2) 00:22:01.653 Could not set queue depth (nvme0n3) 00:22:01.653 Could not set queue depth (nvme0n4) 00:22:01.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:01.919 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:01.919 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:01.919 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:01.919 fio-3.35 00:22:01.919 Starting 4 threads 00:22:03.297 00:22:03.297 job0: (groupid=0, jobs=1): err= 0: pid=1432494: Tue Jun 11 13:49:55 2024 00:22:03.297 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:22:03.297 slat (nsec): min=11336, max=24525, avg=23041.48, stdev=2709.59 00:22:03.297 clat (usec): min=40922, max=41033, avg=40975.87, stdev=27.60 00:22:03.297 lat (usec): min=40945, max=41056, avg=40998.92, stdev=27.40 00:22:03.297 clat percentiles (usec): 00:22:03.297 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:22:03.297 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:22:03.297 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:03.297 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:22:03.297 | 99.99th=[41157] 00:22:03.297 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:22:03.297 slat (nsec): min=11603, max=41276, avg=12782.18, stdev=2338.07 00:22:03.297 clat (usec): min=195, 
max=1201, avg=265.61, stdev=74.09 00:22:03.297 lat (usec): min=207, max=1213, avg=278.39, stdev=74.19 00:22:03.297 clat percentiles (usec): 00:22:03.297 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 231], 00:22:03.297 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 258], 00:22:03.297 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 355], 00:22:03.297 | 99.00th=[ 437], 99.50th=[ 857], 99.90th=[ 1205], 99.95th=[ 1205], 00:22:03.297 | 99.99th=[ 1205] 00:22:03.297 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:22:03.297 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:03.297 lat (usec) : 250=48.22%, 500=47.09%, 1000=0.56% 00:22:03.297 lat (msec) : 2=0.19%, 50=3.94% 00:22:03.297 cpu : usr=0.60%, sys=0.80%, ctx=533, majf=0, minf=2 00:22:03.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.297 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:03.297 job1: (groupid=0, jobs=1): err= 0: pid=1432506: Tue Jun 11 13:49:55 2024 00:22:03.297 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:22:03.297 slat (nsec): min=9468, max=11992, avg=10567.27, stdev=606.82 00:22:03.297 clat (usec): min=447, max=41511, avg=39175.26, stdev=8651.06 00:22:03.297 lat (usec): min=457, max=41521, avg=39185.83, stdev=8651.13 00:22:03.297 clat percentiles (usec): 00:22:03.297 | 1.00th=[ 449], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:22:03.297 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:22:03.297 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:03.297 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:22:03.297 | 99.99th=[41681] 00:22:03.297 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:22:03.297 slat (usec): min=12, max=18927, avg=51.19, stdev=835.85 00:22:03.297 clat (usec): min=179, max=1244, avg=252.52, stdev=72.72 00:22:03.297 lat (usec): min=192, max=19264, avg=303.71, stdev=842.76 00:22:03.297 clat percentiles (usec): 00:22:03.297 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 208], 20.00th=[ 221], 00:22:03.297 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 247], 00:22:03.297 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 326], 00:22:03.297 | 99.00th=[ 367], 99.50th=[ 816], 99.90th=[ 1237], 99.95th=[ 1237], 00:22:03.297 | 99.99th=[ 1237] 00:22:03.297 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:22:03.297 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:03.297 lat (usec) : 250=60.30%, 500=35.02%, 750=0.19%, 1000=0.37% 00:22:03.297 lat (msec) : 2=0.19%, 50=3.93% 00:22:03.297 cpu : usr=0.69%, sys=0.79%, ctx=536, majf=0, minf=1 00:22:03.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.297 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:03.297 job2: (groupid=0, jobs=1): err= 0: pid=1432525: Tue Jun 11 13:49:55 2024 00:22:03.297 read: 
IOPS=1401, BW=5606KiB/s (5741kB/s)(5612KiB/1001msec) 00:22:03.297 slat (nsec): min=8869, max=32295, avg=9651.69, stdev=1077.90 00:22:03.297 clat (usec): min=381, max=539, avg=430.68, stdev=26.84 00:22:03.297 lat (usec): min=391, max=548, avg=440.33, stdev=26.84 00:22:03.297 clat percentiles (usec): 00:22:03.297 | 1.00th=[ 396], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 412], 00:22:03.298 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 424], 60.00th=[ 429], 00:22:03.298 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 469], 95.00th=[ 494], 00:22:03.298 | 99.00th=[ 515], 99.50th=[ 519], 99.90th=[ 529], 99.95th=[ 537], 00:22:03.298 | 99.99th=[ 537] 00:22:03.298 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:03.298 slat (nsec): min=11612, max=39135, avg=12628.80, stdev=1671.96 00:22:03.298 clat (usec): min=198, max=459, avg=232.05, stdev=23.21 00:22:03.298 lat (usec): min=210, max=498, avg=244.68, stdev=23.54 00:22:03.298 clat percentiles (usec): 00:22:03.298 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 212], 00:22:03.298 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 233], 00:22:03.298 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:22:03.298 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 461], 00:22:03.298 | 99.99th=[ 461] 00:22:03.298 bw ( KiB/s): min= 8192, max= 8192, per=69.33%, avg=8192.00, stdev= 0.00, samples=1 00:22:03.298 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:03.298 lat (usec) : 250=39.37%, 500=58.93%, 750=1.70% 00:22:03.298 cpu : usr=1.60%, sys=3.80%, ctx=2939, majf=0, minf=1 00:22:03.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.298 issued rwts: total=1403,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:03.298 job3: (groupid=0, jobs=1): err= 0: pid=1432532: Tue Jun 11 13:49:55 2024 00:22:03.298 read: IOPS=25, BW=100KiB/s (102kB/s)(104KiB/1040msec) 00:22:03.298 slat (nsec): min=9744, max=26564, avg=22327.38, stdev=5757.75 00:22:03.298 clat (usec): min=351, max=41525, avg=34737.23, stdev=14937.37 00:22:03.298 lat (usec): min=361, max=41537, avg=34759.56, stdev=14940.76 00:22:03.298 clat percentiles (usec): 00:22:03.298 | 1.00th=[ 351], 5.00th=[ 388], 10.00th=[ 404], 20.00th=[40633], 00:22:03.298 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:22:03.298 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:03.298 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:22:03.298 | 99.99th=[41681] 00:22:03.298 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:22:03.298 slat (nsec): min=12431, max=40788, avg=13708.03, stdev=1974.71 00:22:03.298 clat (usec): min=195, max=410, avg=244.24, stdev=24.65 00:22:03.298 lat (usec): min=208, max=425, avg=257.95, stdev=25.10 00:22:03.298 clat percentiles (usec): 00:22:03.298 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:22:03.298 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:22:03.298 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 289], 00:22:03.298 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 412], 99.95th=[ 412], 00:22:03.298 | 99.99th=[ 412] 00:22:03.298 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, 
avg=4096.00, stdev= 0.00, samples=1 00:22:03.298 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:03.298 lat (usec) : 250=64.13%, 500=31.78% 00:22:03.298 lat (msec) : 50=4.09% 00:22:03.298 cpu : usr=0.38%, sys=1.06%, ctx=541, majf=0, minf=1 00:22:03.298 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:03.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.298 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.298 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:03.298 00:22:03.298 Run status group 0 (all jobs): 00:22:03.298 READ: bw=5662KiB/s (5797kB/s), 83.6KiB/s-5606KiB/s (85.6kB/s-5741kB/s), io=5888KiB (6029kB), run=1001-1040msec 00:22:03.298 WRITE: bw=11.5MiB/s (12.1MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1040msec 00:22:03.298 00:22:03.298 Disk stats (read/write): 00:22:03.298 nvme0n1: ios=66/512, merge=0/0, ticks=678/130, in_queue=808, util=84.37% 00:22:03.298 nvme0n2: ios=73/512, merge=0/0, ticks=938/124, in_queue=1062, util=88.62% 00:22:03.298 nvme0n3: ios=1081/1488, merge=0/0, ticks=516/340, in_queue=856, util=92.52% 00:22:03.298 nvme0n4: ios=76/512, merge=0/0, ticks=889/117, in_queue=1006, util=93.82% 00:22:03.298 13:49:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:22:03.298 [global] 00:22:03.298 thread=1 00:22:03.298 invalidate=1 00:22:03.298 rw=randwrite 00:22:03.298 time_based=1 00:22:03.298 runtime=1 00:22:03.298 ioengine=libaio 00:22:03.298 direct=1 00:22:03.298 bs=4096 00:22:03.298 iodepth=1 00:22:03.298 norandommap=0 00:22:03.298 numjobs=1 00:22:03.298 00:22:03.298 verify_dump=1 00:22:03.298 verify_backlog=512 00:22:03.298 verify_state_save=0 00:22:03.298 do_verify=1 00:22:03.298 verify=crc32c-intel 00:22:03.298 [job0] 00:22:03.298 filename=/dev/nvme0n1 00:22:03.298 [job1] 00:22:03.298 filename=/dev/nvme0n2 00:22:03.298 [job2] 00:22:03.298 filename=/dev/nvme0n3 00:22:03.298 [job3] 00:22:03.298 filename=/dev/nvme0n4 00:22:03.298 Could not set queue depth (nvme0n1) 00:22:03.298 Could not set queue depth (nvme0n2) 00:22:03.298 Could not set queue depth (nvme0n3) 00:22:03.298 Could not set queue depth (nvme0n4) 00:22:03.576 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:03.576 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:03.576 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:03.576 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:03.576 fio-3.35 00:22:03.576 Starting 4 threads 00:22:04.958 00:22:04.958 job0: (groupid=0, jobs=1): err= 0: pid=1432929: Tue Jun 11 13:49:57 2024 00:22:04.958 read: IOPS=1237, BW=4951KiB/s (5070kB/s)(4956KiB/1001msec) 00:22:04.958 slat (nsec): min=8673, max=27291, avg=9502.45, stdev=1315.42 00:22:04.958 clat (usec): min=347, max=619, avg=452.51, stdev=52.25 00:22:04.958 lat (usec): min=356, max=628, avg=462.01, stdev=52.28 00:22:04.958 clat percentiles (usec): 00:22:04.958 | 1.00th=[ 359], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 400], 00:22:04.958 | 30.00th=[ 416], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 469], 00:22:04.958 | 
70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 537], 00:22:04.958 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 611], 99.95th=[ 619], 00:22:04.958 | 99.99th=[ 619] 00:22:04.958 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:04.958 slat (nsec): min=11906, max=46609, avg=13114.57, stdev=1707.23 00:22:04.958 clat (usec): min=215, max=468, avg=259.14, stdev=29.11 00:22:04.958 lat (usec): min=228, max=480, avg=272.25, stdev=29.43 00:22:04.958 clat percentiles (usec): 00:22:04.958 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:22:04.958 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:22:04.958 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 297], 00:22:04.958 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 461], 99.95th=[ 469], 00:22:04.958 | 99.99th=[ 469] 00:22:04.958 bw ( KiB/s): min= 7320, max= 7320, per=31.85%, avg=7320.00, stdev= 0.00, samples=1 00:22:04.958 iops : min= 1830, max= 1830, avg=1830.00, stdev= 0.00, samples=1 00:22:04.958 lat (usec) : 250=20.97%, 500=70.38%, 750=8.65% 00:22:04.958 cpu : usr=2.60%, sys=5.00%, ctx=2776, majf=0, minf=1 00:22:04.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.958 issued rwts: total=1239,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.958 job1: (groupid=0, jobs=1): err= 0: pid=1432942: Tue Jun 11 13:49:57 2024 00:22:04.958 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:22:04.958 slat (nsec): min=8797, max=26276, avg=9683.48, stdev=1401.52 00:22:04.958 clat (usec): min=378, max=41934, avg=655.88, stdev=2551.72 00:22:04.958 lat (usec): min=388, max=41944, avg=665.56, stdev=2551.72 00:22:04.958 clat percentiles (usec): 00:22:04.958 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 445], 00:22:04.958 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 486], 60.00th=[ 510], 00:22:04.958 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 611], 00:22:04.958 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[41681], 99.95th=[41681], 00:22:04.958 | 99.99th=[41681] 00:22:04.958 write: IOPS=1141, BW=4567KiB/s (4677kB/s)(4572KiB/1001msec); 0 zone resets 00:22:04.958 slat (nsec): min=11773, max=49621, avg=13218.12, stdev=2341.41 00:22:04.958 clat (usec): min=209, max=456, avg=258.35, stdev=30.31 00:22:04.958 lat (usec): min=221, max=469, avg=271.56, stdev=30.79 00:22:04.959 clat percentiles (usec): 00:22:04.959 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:22:04.959 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:22:04.959 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:22:04.959 | 99.00th=[ 392], 99.50th=[ 396], 99.90th=[ 429], 99.95th=[ 457], 00:22:04.959 | 99.99th=[ 457] 00:22:04.959 bw ( KiB/s): min= 4240, max= 4240, per=18.45%, avg=4240.00, stdev= 0.00, samples=1 00:22:04.959 iops : min= 1060, max= 1060, avg=1060.00, stdev= 0.00, samples=1 00:22:04.959 lat (usec) : 250=23.77%, 500=54.91%, 750=21.14% 00:22:04.959 lat (msec) : 50=0.18% 00:22:04.959 cpu : usr=2.60%, sys=3.20%, ctx=2169, majf=0, minf=1 00:22:04.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.959 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.959 issued rwts: total=1024,1143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.959 job2: (groupid=0, jobs=1): err= 0: pid=1432963: Tue Jun 11 13:49:57 2024 00:22:04.959 read: IOPS=1267, BW=5071KiB/s (5193kB/s)(5076KiB/1001msec) 00:22:04.959 slat (nsec): min=9186, max=24502, avg=10199.77, stdev=1321.72 00:22:04.959 clat (usec): min=342, max=691, avg=431.08, stdev=35.31 00:22:04.959 lat (usec): min=352, max=700, avg=441.28, stdev=35.32 00:22:04.959 clat percentiles (usec): 00:22:04.959 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:22:04.959 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 424], 60.00th=[ 433], 00:22:04.959 | 70.00th=[ 437], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 498], 00:22:04.959 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 693], 00:22:04.959 | 99.99th=[ 693] 00:22:04.959 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:04.959 slat (nsec): min=12491, max=46461, avg=13704.28, stdev=1932.30 00:22:04.959 clat (usec): min=215, max=568, avg=266.39, stdev=21.18 00:22:04.959 lat (usec): min=228, max=581, avg=280.10, stdev=21.52 00:22:04.959 clat percentiles (usec): 00:22:04.959 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:22:04.959 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:22:04.959 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:22:04.959 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 482], 99.95th=[ 570], 00:22:04.959 | 99.99th=[ 570] 00:22:04.959 bw ( KiB/s): min= 7648, max= 7648, per=33.28%, avg=7648.00, stdev= 0.00, samples=1 00:22:04.959 iops : min= 1912, max= 1912, avg=1912.00, stdev= 0.00, samples=1 00:22:04.959 lat (usec) : 250=11.23%, 500=86.77%, 750=2.00% 00:22:04.959 cpu : usr=2.80%, sys=5.00%, ctx=2806, majf=0, minf=2 00:22:04.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.959 issued rwts: total=1269,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.959 job3: (groupid=0, jobs=1): err= 0: pid=1432971: Tue Jun 11 13:49:57 2024 00:22:04.959 read: IOPS=1261, BW=5047KiB/s (5168kB/s)(5052KiB/1001msec) 00:22:04.959 slat (nsec): min=8937, max=23244, avg=9648.20, stdev=1098.15 00:22:04.959 clat (usec): min=355, max=676, avg=433.48, stdev=35.04 00:22:04.959 lat (usec): min=365, max=685, avg=443.13, stdev=35.10 00:22:04.959 clat percentiles (usec): 00:22:04.959 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:22:04.959 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 433], 00:22:04.959 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 506], 00:22:04.959 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[ 603], 99.95th=[ 676], 00:22:04.959 | 99.99th=[ 676] 00:22:04.959 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:04.959 slat (nsec): min=12290, max=61561, avg=13445.59, stdev=2093.23 00:22:04.959 clat (usec): min=217, max=448, avg=266.88, stdev=18.39 00:22:04.959 lat (usec): min=230, max=465, avg=280.32, stdev=18.86 00:22:04.959 clat percentiles (usec): 00:22:04.959 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:22:04.959 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 
60.00th=[ 269], 00:22:04.959 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:22:04.959 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 404], 99.95th=[ 449], 00:22:04.959 | 99.99th=[ 449] 00:22:04.959 bw ( KiB/s): min= 7560, max= 7560, per=32.90%, avg=7560.00, stdev= 0.00, samples=1 00:22:04.959 iops : min= 1890, max= 1890, avg=1890.00, stdev= 0.00, samples=1 00:22:04.959 lat (usec) : 250=8.54%, 500=88.78%, 750=2.68% 00:22:04.959 cpu : usr=3.50%, sys=4.20%, ctx=2801, majf=0, minf=1 00:22:04.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:04.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:04.959 issued rwts: total=1263,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:04.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:04.959 00:22:04.959 Run status group 0 (all jobs): 00:22:04.959 READ: bw=18.7MiB/s (19.6MB/s), 4092KiB/s-5071KiB/s (4190kB/s-5193kB/s), io=18.7MiB (19.6MB), run=1001-1001msec 00:22:04.959 WRITE: bw=22.4MiB/s (23.5MB/s), 4567KiB/s-6138KiB/s (4677kB/s-6285kB/s), io=22.5MiB (23.6MB), run=1001-1001msec 00:22:04.959 00:22:04.959 Disk stats (read/write): 00:22:04.959 nvme0n1: ios=1052/1244, merge=0/0, ticks=1394/305, in_queue=1699, util=95.59% 00:22:04.959 nvme0n2: ios=781/1024, merge=0/0, ticks=1210/255, in_queue=1465, util=98.87% 00:22:04.959 nvme0n3: ios=1053/1306, merge=0/0, ticks=1354/329, in_queue=1683, util=99.89% 00:22:04.959 nvme0n4: ios=1048/1296, merge=0/0, ticks=1342/330, in_queue=1672, util=100.00% 00:22:04.959 13:49:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:22:04.959 [global] 00:22:04.959 thread=1 00:22:04.959 invalidate=1 00:22:04.959 rw=write 00:22:04.959 time_based=1 00:22:04.959 runtime=1 00:22:04.959 ioengine=libaio 00:22:04.959 direct=1 00:22:04.959 bs=4096 00:22:04.959 iodepth=128 00:22:04.959 norandommap=0 00:22:04.959 numjobs=1 00:22:04.959 00:22:04.959 verify_dump=1 00:22:04.959 verify_backlog=512 00:22:04.959 verify_state_save=0 00:22:04.959 do_verify=1 00:22:04.959 verify=crc32c-intel 00:22:04.959 [job0] 00:22:04.959 filename=/dev/nvme0n1 00:22:04.959 [job1] 00:22:04.959 filename=/dev/nvme0n2 00:22:04.959 [job2] 00:22:04.959 filename=/dev/nvme0n3 00:22:04.959 [job3] 00:22:04.959 filename=/dev/nvme0n4 00:22:04.959 Could not set queue depth (nvme0n1) 00:22:04.959 Could not set queue depth (nvme0n2) 00:22:04.959 Could not set queue depth (nvme0n3) 00:22:04.959 Could not set queue depth (nvme0n4) 00:22:05.253 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:05.253 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:05.253 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:05.253 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:05.253 fio-3.35 00:22:05.253 Starting 4 threads 00:22:06.634 00:22:06.634 job0: (groupid=0, jobs=1): err= 0: pid=1433344: Tue Jun 11 13:49:59 2024 00:22:06.634 read: IOPS=3042, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:22:06.634 slat (usec): min=2, max=31930, avg=197.61, stdev=1520.48 00:22:06.634 clat (msec): min=3, max=100, avg=25.25, stdev=22.99 00:22:06.634 lat (msec): min=6, 
max=100, avg=25.45, stdev=23.13 00:22:06.634 clat percentiles (msec): 00:22:06.634 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:22:06.634 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:22:06.634 | 70.00th=[ 22], 80.00th=[ 34], 90.00th=[ 67], 95.00th=[ 90], 00:22:06.634 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:22:06.634 | 99.99th=[ 102] 00:22:06.634 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:22:06.634 slat (usec): min=3, max=18296, avg=121.93, stdev=784.40 00:22:06.634 clat (usec): min=1556, max=37352, avg=16385.37, stdev=6733.83 00:22:06.634 lat (usec): min=1569, max=37364, avg=16507.29, stdev=6755.25 00:22:06.634 clat percentiles (usec): 00:22:06.634 | 1.00th=[ 6325], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[12387], 00:22:06.634 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[15926], 00:22:06.634 | 70.00th=[19268], 80.00th=[21103], 90.00th=[26084], 95.00th=[30016], 00:22:06.634 | 99.00th=[35914], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:22:06.634 | 99.99th=[37487] 00:22:06.634 bw ( KiB/s): min= 6200, max=18376, per=20.43%, avg=12288.00, stdev=8609.73, samples=2 00:22:06.634 iops : min= 1550, max= 4594, avg=3072.00, stdev=2152.43, samples=2 00:22:06.634 lat (msec) : 2=0.08%, 4=0.02%, 10=10.02%, 20=61.47%, 50=20.94% 00:22:06.634 lat (msec) : 100=6.83%, 250=0.64% 00:22:06.634 cpu : usr=3.08%, sys=3.88%, ctx=348, majf=0, minf=1 00:22:06.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:06.634 issued rwts: total=3064,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:06.634 job1: (groupid=0, jobs=1): err= 0: pid=1433358: Tue Jun 11 13:49:59 2024 00:22:06.634 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:22:06.634 slat (usec): min=2, max=12441, avg=90.16, stdev=651.33 00:22:06.634 clat (usec): min=4149, max=66277, avg=13136.30, stdev=5788.48 00:22:06.634 lat (usec): min=4157, max=66288, avg=13226.46, stdev=5838.37 00:22:06.634 clat percentiles (usec): 00:22:06.634 | 1.00th=[ 5866], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9241], 00:22:06.634 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[12780], 00:22:06.634 | 70.00th=[13698], 80.00th=[16712], 90.00th=[22414], 95.00th=[23200], 00:22:06.634 | 99.00th=[29230], 99.50th=[42206], 99.90th=[66323], 99.95th=[66323], 00:22:06.634 | 99.99th=[66323] 00:22:06.634 write: IOPS=4211, BW=16.4MiB/s (17.2MB/s)(16.7MiB/1014msec); 0 zone resets 00:22:06.634 slat (usec): min=3, max=58576, avg=131.10, stdev=1183.67 00:22:06.634 clat (usec): min=1313, max=131836, avg=17042.22, stdev=21155.12 00:22:06.634 lat (usec): min=1324, max=131848, avg=17173.32, stdev=21264.75 00:22:06.634 clat percentiles (msec): 00:22:06.634 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:22:06.634 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:22:06.634 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 28], 95.00th=[ 69], 00:22:06.634 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 132], 00:22:06.634 | 99.99th=[ 132] 00:22:06.634 bw ( KiB/s): min=10240, max=22896, per=27.55%, avg=16568.00, stdev=8949.14, samples=2 00:22:06.634 iops : min= 2560, max= 5724, avg=4142.00, stdev=2237.29, samples=2 00:22:06.634 lat (msec) : 2=0.24%, 4=0.66%, 
10=35.44%, 20=48.52%, 50=11.52% 00:22:06.634 lat (msec) : 100=2.59%, 250=1.03% 00:22:06.634 cpu : usr=5.53%, sys=6.02%, ctx=475, majf=0, minf=1 00:22:06.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:06.634 issued rwts: total=4096,4270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:06.634 job2: (groupid=0, jobs=1): err= 0: pid=1433384: Tue Jun 11 13:49:59 2024 00:22:06.634 read: IOPS=4172, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1008msec) 00:22:06.634 slat (usec): min=2, max=13404, avg=112.46, stdev=788.88 00:22:06.634 clat (usec): min=3691, max=27880, avg=15550.54, stdev=3716.42 00:22:06.634 lat (usec): min=3708, max=27908, avg=15663.01, stdev=3742.71 00:22:06.634 clat percentiles (usec): 00:22:06.634 | 1.00th=[ 7439], 5.00th=[11076], 10.00th=[11994], 20.00th=[12780], 00:22:06.634 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14353], 60.00th=[14877], 00:22:06.634 | 70.00th=[16909], 80.00th=[19268], 90.00th=[21103], 95.00th=[22676], 00:22:06.634 | 99.00th=[26084], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:22:06.634 | 99.99th=[27919] 00:22:06.634 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:22:06.634 slat (usec): min=4, max=21255, avg=92.92, stdev=667.91 00:22:06.634 clat (usec): min=4249, max=38937, avg=13510.16, stdev=5557.37 00:22:06.634 lat (usec): min=4840, max=38952, avg=13603.08, stdev=5596.68 00:22:06.634 clat percentiles (usec): 00:22:06.634 | 1.00th=[ 5342], 5.00th=[ 6849], 10.00th=[ 7898], 20.00th=[ 9110], 00:22:06.634 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13435], 60.00th=[14353], 00:22:06.634 | 70.00th=[14615], 80.00th=[14877], 90.00th=[17957], 95.00th=[23462], 00:22:06.634 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:22:06.634 | 99.99th=[39060] 00:22:06.634 bw ( KiB/s): min=16816, max=19904, per=30.53%, avg=18360.00, stdev=2183.55, samples=2 00:22:06.634 iops : min= 4204, max= 4976, avg=4590.00, stdev=545.89, samples=2 00:22:06.634 lat (msec) : 4=0.02%, 10=14.35%, 20=76.12%, 50=9.51% 00:22:06.634 cpu : usr=4.37%, sys=9.43%, ctx=442, majf=0, minf=1 00:22:06.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:06.634 issued rwts: total=4206,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:06.634 job3: (groupid=0, jobs=1): err= 0: pid=1433394: Tue Jun 11 13:49:59 2024 00:22:06.634 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:22:06.634 slat (usec): min=3, max=15872, avg=142.26, stdev=1015.80 00:22:06.634 clat (usec): min=2283, max=43702, avg=18602.24, stdev=6477.40 00:22:06.634 lat (usec): min=2300, max=51012, avg=18744.50, stdev=6553.96 00:22:06.634 clat percentiles (usec): 00:22:06.634 | 1.00th=[ 5604], 5.00th=[ 8848], 10.00th=[14877], 20.00th=[15401], 00:22:06.634 | 30.00th=[15664], 40.00th=[15795], 50.00th=[16581], 60.00th=[17433], 00:22:06.634 | 70.00th=[19268], 80.00th=[22676], 90.00th=[26870], 95.00th=[32637], 00:22:06.634 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:22:06.634 | 99.99th=[43779] 00:22:06.634 write: IOPS=3251, 
BW=12.7MiB/s (13.3MB/s)(12.9MiB/1013msec); 0 zone resets 00:22:06.634 slat (usec): min=3, max=11386, avg=142.28, stdev=722.14 00:22:06.634 clat (usec): min=938, max=64089, avg=21698.48, stdev=11145.09 00:22:06.634 lat (usec): min=955, max=64101, avg=21840.76, stdev=11219.48 00:22:06.634 clat percentiles (usec): 00:22:06.634 | 1.00th=[ 2474], 5.00th=[ 6128], 10.00th=[ 8291], 20.00th=[11338], 00:22:06.634 | 30.00th=[12911], 40.00th=[14746], 50.00th=[21103], 60.00th=[26870], 00:22:06.634 | 70.00th=[29492], 80.00th=[32637], 90.00th=[37487], 95.00th=[39060], 00:22:06.634 | 99.00th=[47449], 99.50th=[47449], 99.90th=[57934], 99.95th=[57934], 00:22:06.634 | 99.99th=[64226] 00:22:06.634 bw ( KiB/s): min=11336, max=14000, per=21.07%, avg=12668.00, stdev=1883.73, samples=2 00:22:06.634 iops : min= 2834, max= 3500, avg=3167.00, stdev=470.93, samples=2 00:22:06.634 lat (usec) : 1000=0.05% 00:22:06.634 lat (msec) : 2=0.31%, 4=0.66%, 10=8.51%, 20=50.88%, 50=39.48% 00:22:06.634 lat (msec) : 100=0.11% 00:22:06.634 cpu : usr=3.46%, sys=5.83%, ctx=353, majf=0, minf=1 00:22:06.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:06.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:06.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:06.634 issued rwts: total=3072,3294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:06.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:06.634 00:22:06.634 Run status group 0 (all jobs): 00:22:06.634 READ: bw=55.6MiB/s (58.3MB/s), 11.8MiB/s-16.3MiB/s (12.4MB/s-17.1MB/s), io=56.4MiB (59.1MB), run=1007-1014msec 00:22:06.634 WRITE: bw=58.7MiB/s (61.6MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.7MB/s), io=59.5MiB (62.4MB), run=1007-1014msec 00:22:06.634 00:22:06.634 Disk stats (read/write): 00:22:06.634 nvme0n1: ios=2738/3072, merge=0/0, ticks=16755/23107, in_queue=39862, util=81.56% 00:22:06.634 nvme0n2: ios=2920/3072, merge=0/0, ticks=28613/34674, in_queue=63287, util=96.49% 00:22:06.634 nvme0n3: ios=3218/3584, merge=0/0, ticks=48191/48445, in_queue=96636, util=97.27% 00:22:06.635 nvme0n4: ios=2167/2560, merge=0/0, ticks=37224/56417, in_queue=93641, util=88.98% 00:22:06.635 13:49:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:22:06.635 [global] 00:22:06.635 thread=1 00:22:06.635 invalidate=1 00:22:06.635 rw=randwrite 00:22:06.635 time_based=1 00:22:06.635 runtime=1 00:22:06.635 ioengine=libaio 00:22:06.635 direct=1 00:22:06.635 bs=4096 00:22:06.635 iodepth=128 00:22:06.635 norandommap=0 00:22:06.635 numjobs=1 00:22:06.635 00:22:06.635 verify_dump=1 00:22:06.635 verify_backlog=512 00:22:06.635 verify_state_save=0 00:22:06.635 do_verify=1 00:22:06.635 verify=crc32c-intel 00:22:06.635 [job0] 00:22:06.635 filename=/dev/nvme0n1 00:22:06.635 [job1] 00:22:06.635 filename=/dev/nvme0n2 00:22:06.635 [job2] 00:22:06.635 filename=/dev/nvme0n3 00:22:06.635 [job3] 00:22:06.635 filename=/dev/nvme0n4 00:22:06.635 Could not set queue depth (nvme0n1) 00:22:06.635 Could not set queue depth (nvme0n2) 00:22:06.635 Could not set queue depth (nvme0n3) 00:22:06.635 Could not set queue depth (nvme0n4) 00:22:06.894 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:06.894 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:06.894 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:06.894 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:06.894 fio-3.35 00:22:06.894 Starting 4 threads 00:22:08.271 00:22:08.271 job0: (groupid=0, jobs=1): err= 0: pid=1433786: Tue Jun 11 13:50:00 2024 00:22:08.271 read: IOPS=2818, BW=11.0MiB/s (11.5MB/s)(11.1MiB/1004msec) 00:22:08.271 slat (usec): min=3, max=11754, avg=178.16, stdev=937.68 00:22:08.271 clat (usec): min=606, max=43961, avg=22392.84, stdev=9086.97 00:22:08.272 lat (usec): min=4177, max=43978, avg=22571.00, stdev=9105.75 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 4555], 5.00th=[12256], 10.00th=[13960], 20.00th=[15139], 00:22:08.272 | 30.00th=[16712], 40.00th=[18220], 50.00th=[19268], 60.00th=[20579], 00:22:08.272 | 70.00th=[25822], 80.00th=[30802], 90.00th=[39060], 95.00th=[42206], 00:22:08.272 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:22:08.272 | 99.99th=[43779] 00:22:08.272 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:22:08.272 slat (usec): min=4, max=8336, avg=151.91, stdev=772.74 00:22:08.272 clat (usec): min=10558, max=42926, avg=20422.58, stdev=5783.75 00:22:08.272 lat (usec): min=10932, max=42940, avg=20574.49, stdev=5790.71 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[11863], 5.00th=[13435], 10.00th=[13960], 20.00th=[15664], 00:22:08.272 | 30.00th=[16712], 40.00th=[17695], 50.00th=[19006], 60.00th=[20579], 00:22:08.272 | 70.00th=[22676], 80.00th=[25035], 90.00th=[27657], 95.00th=[33424], 00:22:08.272 | 99.00th=[37487], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730], 00:22:08.272 | 99.99th=[42730] 00:22:08.272 bw ( KiB/s): min=12288, max=12288, per=19.37%, avg=12288.00, stdev= 0.00, samples=2 00:22:08.272 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:22:08.272 lat (usec) : 750=0.02% 00:22:08.272 lat (msec) : 10=1.12%, 20=54.61%, 50=44.26% 00:22:08.272 cpu : usr=3.19%, sys=5.58%, ctx=327, majf=0, minf=1 00:22:08.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:08.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.272 issued rwts: total=2830,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.272 job1: (groupid=0, jobs=1): err= 0: pid=1433803: Tue Jun 11 13:50:00 2024 00:22:08.272 read: IOPS=4415, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1004msec) 00:22:08.272 slat (nsec): min=1688, max=12167k, avg=111556.95, stdev=720796.01 00:22:08.272 clat (usec): min=716, max=59966, avg=14392.73, stdev=5670.21 00:22:08.272 lat (usec): min=3465, max=66474, avg=14504.29, stdev=5715.65 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 4555], 5.00th=[ 8291], 10.00th=[ 9634], 20.00th=[11076], 00:22:08.272 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13435], 00:22:08.272 | 70.00th=[14877], 80.00th=[18744], 90.00th=[20841], 95.00th=[25560], 00:22:08.272 | 99.00th=[28705], 99.50th=[29754], 99.90th=[60031], 99.95th=[60031], 00:22:08.272 | 99.99th=[60031] 00:22:08.272 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:22:08.272 slat (usec): min=2, max=8129, avg=103.81, stdev=595.06 00:22:08.272 clat (usec): min=1706, max=33244, avg=13685.49, stdev=4063.43 00:22:08.272 lat (usec): min=1717, max=33267, 
avg=13789.30, stdev=4111.23 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[10945], 00:22:08.272 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12780], 00:22:08.272 | 70.00th=[15008], 80.00th=[17171], 90.00th=[19792], 95.00th=[21103], 00:22:08.272 | 99.00th=[26870], 99.50th=[26870], 99.90th=[28443], 99.95th=[28443], 00:22:08.272 | 99.99th=[33162] 00:22:08.272 bw ( KiB/s): min=16384, max=20480, per=29.06%, avg=18432.00, stdev=2896.31, samples=2 00:22:08.272 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:22:08.272 lat (usec) : 750=0.01% 00:22:08.272 lat (msec) : 2=0.09%, 4=0.07%, 10=12.07%, 20=76.22%, 50=11.36% 00:22:08.272 lat (msec) : 100=0.19% 00:22:08.272 cpu : usr=2.69%, sys=6.28%, ctx=402, majf=0, minf=1 00:22:08.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:08.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.272 issued rwts: total=4433,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.272 job2: (groupid=0, jobs=1): err= 0: pid=1433829: Tue Jun 11 13:50:00 2024 00:22:08.272 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:22:08.272 slat (usec): min=2, max=12811, avg=117.25, stdev=789.62 00:22:08.272 clat (usec): min=6145, max=52124, avg=15251.76, stdev=4610.71 00:22:08.272 lat (usec): min=6153, max=53517, avg=15369.01, stdev=4655.85 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 7111], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[12125], 00:22:08.272 | 30.00th=[12780], 40.00th=[14091], 50.00th=[15139], 60.00th=[15401], 00:22:08.272 | 70.00th=[15926], 80.00th=[16909], 90.00th=[20579], 95.00th=[23200], 00:22:08.272 | 99.00th=[28181], 99.50th=[32113], 99.90th=[52167], 99.95th=[52167], 00:22:08.272 | 99.99th=[52167] 00:22:08.272 write: IOPS=4130, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1004msec); 0 zone resets 00:22:08.272 slat (usec): min=2, max=22277, avg=116.58, stdev=787.07 00:22:08.272 clat (usec): min=708, max=44374, avg=15487.05, stdev=4583.39 00:22:08.272 lat (usec): min=5104, max=44407, avg=15603.63, stdev=4619.55 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 6128], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11863], 00:22:08.272 | 30.00th=[13173], 40.00th=[14615], 50.00th=[15139], 60.00th=[15401], 00:22:08.272 | 70.00th=[15795], 80.00th=[17957], 90.00th=[21627], 95.00th=[26608], 00:22:08.272 | 99.00th=[27657], 99.50th=[27919], 99.90th=[27919], 99.95th=[30802], 00:22:08.272 | 99.99th=[44303] 00:22:08.272 bw ( KiB/s): min=15712, max=17056, per=25.83%, avg=16384.00, stdev=950.35, samples=2 00:22:08.272 iops : min= 3928, max= 4264, avg=4096.00, stdev=237.59, samples=2 00:22:08.272 lat (usec) : 750=0.01% 00:22:08.272 lat (msec) : 10=7.92%, 20=77.98%, 50=13.91%, 100=0.17% 00:22:08.272 cpu : usr=3.29%, sys=6.68%, ctx=286, majf=0, minf=1 00:22:08.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:08.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.272 issued rwts: total=4096,4147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.272 job3: (groupid=0, jobs=1): err= 0: pid=1433834: Tue Jun 11 13:50:00 2024 
00:22:08.272 read: IOPS=3843, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1002msec) 00:22:08.272 slat (usec): min=3, max=8261, avg=126.15, stdev=734.04 00:22:08.272 clat (usec): min=1457, max=26014, avg=16444.92, stdev=2653.54 00:22:08.272 lat (usec): min=5644, max=26525, avg=16571.07, stdev=2698.88 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 9503], 5.00th=[12256], 10.00th=[13829], 20.00th=[14484], 00:22:08.272 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16450], 60.00th=[16909], 00:22:08.272 | 70.00th=[17433], 80.00th=[18482], 90.00th=[19530], 95.00th=[21103], 00:22:08.272 | 99.00th=[22152], 99.50th=[23725], 99.90th=[24511], 99.95th=[26084], 00:22:08.272 | 99.99th=[26084] 00:22:08.272 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:22:08.272 slat (usec): min=4, max=12072, avg=115.22, stdev=738.58 00:22:08.272 clat (usec): min=1262, max=54608, avg=15575.32, stdev=4603.19 00:22:08.272 lat (usec): min=1447, max=54619, avg=15690.55, stdev=4615.65 00:22:08.272 clat percentiles (usec): 00:22:08.272 | 1.00th=[ 7046], 5.00th=[10552], 10.00th=[12911], 20.00th=[14091], 00:22:08.272 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:22:08.272 | 70.00th=[16188], 80.00th=[16450], 90.00th=[17171], 95.00th=[20055], 00:22:08.272 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:22:08.272 | 99.99th=[54789] 00:22:08.272 bw ( KiB/s): min=16384, max=16384, per=25.83%, avg=16384.00, stdev= 0.00, samples=2 00:22:08.272 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:22:08.272 lat (msec) : 2=0.05%, 10=2.97%, 20=90.10%, 50=6.87%, 100=0.01% 00:22:08.272 cpu : usr=5.39%, sys=7.29%, ctx=285, majf=0, minf=1 00:22:08.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:08.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:08.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:08.272 issued rwts: total=3851,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:08.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:08.272 00:22:08.272 Run status group 0 (all jobs): 00:22:08.272 READ: bw=59.2MiB/s (62.1MB/s), 11.0MiB/s-17.2MiB/s (11.5MB/s-18.1MB/s), io=59.4MiB (62.3MB), run=1002-1004msec 00:22:08.272 WRITE: bw=62.0MiB/s (65.0MB/s), 12.0MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=62.2MiB (65.2MB), run=1002-1004msec 00:22:08.272 00:22:08.272 Disk stats (read/write): 00:22:08.272 nvme0n1: ios=2077/2401, merge=0/0, ticks=13030/12341, in_queue=25371, util=100.00% 00:22:08.272 nvme0n2: ios=3159/3584, merge=0/0, ticks=21070/18974, in_queue=40044, util=82.05% 00:22:08.272 nvme0n3: ios=3130/3584, merge=0/0, ticks=21826/27364, in_queue=49190, util=95.94% 00:22:08.272 nvme0n4: ios=3072/3241, merge=0/0, ticks=24870/26166, in_queue=51036, util=88.90% 00:22:08.272 13:50:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:22:08.272 13:50:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1433963 00:22:08.272 13:50:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:08.272 13:50:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:22:08.272 [global] 00:22:08.272 thread=1 00:22:08.272 invalidate=1 00:22:08.272 rw=read 00:22:08.272 time_based=1 00:22:08.272 runtime=10 00:22:08.272 ioengine=libaio 00:22:08.272 direct=1 00:22:08.272 bs=4096 00:22:08.272 iodepth=1 00:22:08.272 norandommap=1 00:22:08.272 
numjobs=1 00:22:08.272 00:22:08.272 [job0] 00:22:08.272 filename=/dev/nvme0n1 00:22:08.272 [job1] 00:22:08.272 filename=/dev/nvme0n2 00:22:08.272 [job2] 00:22:08.272 filename=/dev/nvme0n3 00:22:08.272 [job3] 00:22:08.272 filename=/dev/nvme0n4 00:22:08.272 Could not set queue depth (nvme0n1) 00:22:08.272 Could not set queue depth (nvme0n2) 00:22:08.272 Could not set queue depth (nvme0n3) 00:22:08.272 Could not set queue depth (nvme0n4) 00:22:08.837 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:08.837 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:08.837 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:08.837 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:08.837 fio-3.35 00:22:08.837 Starting 4 threads 00:22:11.371 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:11.371 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:11.629 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=618496, buflen=4096 00:22:11.629 fio: pid=1434258, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:11.629 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:11.629 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:11.629 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=14954496, buflen=4096 00:22:11.629 fio: pid=1434256, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:11.888 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28184576, buflen=4096 00:22:11.888 fio: pid=1434227, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:11.888 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:11.888 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:12.147 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5091328, buflen=4096 00:22:12.147 fio: pid=1434238, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:12.147 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:12.147 13:50:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:12.147 00:22:12.147 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1434227: Tue Jun 11 13:50:04 2024 00:22:12.147 read: IOPS=2213, BW=8853KiB/s (9065kB/s)(26.9MiB/3109msec) 00:22:12.147 slat (usec): min=8, max=11453, avg=15.48, stdev=236.55 00:22:12.147 clat (usec): min=259, max=2465, avg=431.34, stdev=54.57 00:22:12.147 lat (usec): min=268, max=12095, avg=446.82, stdev=246.67 00:22:12.147 clat percentiles (usec): 00:22:12.147 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 388], 20.00th=[ 
404], 00:22:12.147 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 433], 60.00th=[ 445], 00:22:12.147 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 478], 95.00th=[ 498], 00:22:12.147 | 99.00th=[ 545], 99.50th=[ 594], 99.90th=[ 652], 99.95th=[ 685], 00:22:12.147 | 99.99th=[ 2474] 00:22:12.147 bw ( KiB/s): min= 8279, max= 9856, per=62.73%, avg=8925.67, stdev=573.78, samples=6 00:22:12.147 iops : min= 2069, max= 2464, avg=2231.17, stdev=143.66, samples=6 00:22:12.147 lat (usec) : 500=95.52%, 750=4.43% 00:22:12.147 lat (msec) : 2=0.01%, 4=0.01% 00:22:12.147 cpu : usr=1.19%, sys=3.12%, ctx=6889, majf=0, minf=1 00:22:12.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.147 issued rwts: total=6882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:12.147 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1434238: Tue Jun 11 13:50:04 2024 00:22:12.147 read: IOPS=371, BW=1483KiB/s (1518kB/s)(4972KiB/3353msec) 00:22:12.147 slat (usec): min=8, max=13704, avg=21.48, stdev=388.29 00:22:12.147 clat (usec): min=224, max=42221, avg=2656.38, stdev=9448.18 00:22:12.147 lat (usec): min=233, max=55926, avg=2677.86, stdev=9505.83 00:22:12.147 clat percentiles (usec): 00:22:12.147 | 1.00th=[ 241], 5.00th=[ 277], 10.00th=[ 318], 20.00th=[ 330], 00:22:12.147 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:22:12.147 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 429], 95.00th=[40633], 00:22:12.147 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:12.147 | 99.99th=[42206] 00:22:12.147 bw ( KiB/s): min= 91, max= 4592, per=5.94%, avg=845.67, stdev=1835.33, samples=6 00:22:12.147 iops : min= 22, max= 1148, avg=211.17, stdev=458.95, samples=6 00:22:12.147 lat (usec) : 250=2.41%, 500=90.11%, 750=1.77% 00:22:12.147 lat (msec) : 50=5.63% 00:22:12.147 cpu : usr=0.21%, sys=0.60%, ctx=1247, majf=0, minf=1 00:22:12.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.147 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.147 issued rwts: total=1244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:12.147 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1434256: Tue Jun 11 13:50:05 2024 00:22:12.147 read: IOPS=1260, BW=5039KiB/s (5160kB/s)(14.3MiB/2898msec) 00:22:12.147 slat (usec): min=8, max=189, avg=11.44, stdev=12.18 00:22:12.147 clat (usec): min=201, max=41964, avg=774.89, stdev=3964.55 00:22:12.147 lat (usec): min=317, max=41990, avg=786.32, stdev=3965.73 00:22:12.147 clat percentiles (usec): 00:22:12.148 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:22:12.148 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:22:12.148 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[ 449], 95.00th=[ 498], 00:22:12.148 | 99.00th=[ 660], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:22:12.148 | 99.99th=[42206] 00:22:12.148 bw ( KiB/s): min= 96, max=10984, per=40.92%, avg=5822.00, stdev=5292.08, samples=5 00:22:12.148 iops : min= 24, max= 2746, avg=1455.40, stdev=1322.95, samples=5 
00:22:12.148 lat (usec) : 250=0.14%, 500=95.10%, 750=3.75% 00:22:12.148 lat (msec) : 4=0.03%, 50=0.96% 00:22:12.148 cpu : usr=0.69%, sys=1.48%, ctx=3654, majf=0, minf=1 00:22:12.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.148 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.148 issued rwts: total=3652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:12.148 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1434258: Tue Jun 11 13:50:05 2024 00:22:12.148 read: IOPS=56, BW=226KiB/s (232kB/s)(604KiB/2670msec) 00:22:12.148 slat (usec): min=9, max=170, avg=19.02, stdev=16.22 00:22:12.148 clat (usec): min=351, max=42416, avg=17492.71, stdev=20174.84 00:22:12.148 lat (usec): min=361, max=42441, avg=17511.69, stdev=20179.27 00:22:12.148 clat percentiles (usec): 00:22:12.148 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 474], 00:22:12.148 | 30.00th=[ 498], 40.00th=[ 523], 50.00th=[ 553], 60.00th=[40633], 00:22:12.148 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:22:12.148 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:12.148 | 99.99th=[42206] 00:22:12.148 bw ( KiB/s): min= 96, max= 784, per=1.65%, avg=235.00, stdev=306.92, samples=5 00:22:12.148 iops : min= 24, max= 196, avg=58.60, stdev=76.81, samples=5 00:22:12.148 lat (usec) : 500=30.92%, 750=26.97% 00:22:12.148 lat (msec) : 50=41.45% 00:22:12.148 cpu : usr=0.00%, sys=0.19%, ctx=153, majf=0, minf=2 00:22:12.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.148 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.148 issued rwts: total=152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:12.148 00:22:12.148 Run status group 0 (all jobs): 00:22:12.148 READ: bw=13.9MiB/s (14.6MB/s), 226KiB/s-8853KiB/s (232kB/s-9065kB/s), io=46.6MiB (48.8MB), run=2670-3353msec 00:22:12.148 00:22:12.148 Disk stats (read/write): 00:22:12.148 nvme0n1: ios=6884/0, merge=0/0, ticks=3721/0, in_queue=3721, util=97.87% 00:22:12.148 nvme0n2: ios=1225/0, merge=0/0, ticks=3284/0, in_queue=3284, util=95.39% 00:22:12.148 nvme0n3: ios=3649/0, merge=0/0, ticks=2707/0, in_queue=2707, util=96.29% 00:22:12.148 nvme0n4: ios=149/0, merge=0/0, ticks=2559/0, in_queue=2559, util=96.41% 00:22:12.406 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:12.406 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:12.665 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:12.665 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:12.924 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:12.924 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:13.183 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:13.183 13:50:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1433963 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:13.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:13.442 nvmf hotplug test: fio failed as expected 00:22:13.442 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.701 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.701 rmmod nvme_tcp 00:22:13.960 rmmod nvme_fabrics 00:22:13.960 rmmod nvme_keyring 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1430869 ']' 00:22:13.960 
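[Editor's note] The hotplug sequence traced above reduces to: start fio against the exported namespaces, delete the backing bdevs over RPC while I/O is in flight, and expect fio to exit with Remote I/O errors. A minimal sketch of that pattern, assuming $rootdir points at an SPDK checkout (paths shortened; this is not the verbatim fio.sh test code):

    rootdir=/path/to/spdk   # assumption: your SPDK checkout
    # Start background reads against the NVMe-oF block devices.
    "$rootdir/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # Tear the backing bdevs out from under fio; each delete should
    # surface as a Remote I/O error on the initiator, as logged above.
    "$rootdir/scripts/rpc.py" bdev_raid_delete concat0
    "$rootdir/scripts/rpc.py" bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rootdir/scripts/rpc.py" bdev_malloc_delete "$m"
    done
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'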
13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1430869 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 1430869 ']' 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 1430869 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1430869 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1430869' 00:22:13.960 killing process with pid 1430869 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 1430869 00:22:13.960 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 1430869 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.220 13:50:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.125 13:50:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.125 00:22:16.125 real 0m30.225s 00:22:16.125 user 2m22.603s 00:22:16.126 sys 0m10.782s 00:22:16.126 13:50:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:16.126 13:50:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.126 ************************************ 00:22:16.126 END TEST nvmf_fio_target 00:22:16.126 ************************************ 00:22:16.385 13:50:09 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:16.385 13:50:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:16.385 13:50:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:16.385 13:50:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.385 ************************************ 00:22:16.385 START TEST nvmf_bdevio 00:22:16.385 ************************************ 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:16.385 * Looking for test storage... 
00:22:16.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.385 13:50:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:24.510 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:24.510 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:24.510 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:24.511 Found net devices under 0000:af:00.0: cvl_0_0 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:24.511 
Found net devices under 0000:af:00.1: cvl_0_1 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.511 13:50:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:24.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:22:24.511 00:22:24.511 --- 10.0.0.2 ping statistics --- 00:22:24.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.511 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:24.511 00:22:24.511 --- 10.0.0.1 ping statistics --- 00:22:24.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.511 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1438879 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1438879 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 1438879 ']' 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:24.511 13:50:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 [2024-06-11 13:50:16.342227] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:22:24.511 [2024-06-11 13:50:16.342287] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.511 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.511 [2024-06-11 13:50:16.451125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.511 [2024-06-11 13:50:16.534603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.511 [2024-06-11 13:50:16.534646] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:24.511 [2024-06-11 13:50:16.534659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.511 [2024-06-11 13:50:16.534671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.511 [2024-06-11 13:50:16.534681] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.511 [2024-06-11 13:50:16.534809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.511 [2024-06-11 13:50:16.534917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:22:24.511 [2024-06-11 13:50:16.535028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.511 [2024-06-11 13:50:16.535028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 [2024-06-11 13:50:17.296987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 Malloc0 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.511 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
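[Editor's note] For readability, the bdevio target bring-up traced above, collected in one place; every value is taken directly from the log, only the rpc.py path is abbreviated:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420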
00:22:24.512 [2024-06-11 13:50:17.344706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.512 { 00:22:24.512 "params": { 00:22:24.512 "name": "Nvme$subsystem", 00:22:24.512 "trtype": "$TEST_TRANSPORT", 00:22:24.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.512 "adrfam": "ipv4", 00:22:24.512 "trsvcid": "$NVMF_PORT", 00:22:24.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.512 "hdgst": ${hdgst:-false}, 00:22:24.512 "ddgst": ${ddgst:-false} 00:22:24.512 }, 00:22:24.512 "method": "bdev_nvme_attach_controller" 00:22:24.512 } 00:22:24.512 EOF 00:22:24.512 )") 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:22:24.512 13:50:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:24.512 "params": { 00:22:24.512 "name": "Nvme1", 00:22:24.512 "trtype": "tcp", 00:22:24.512 "traddr": "10.0.0.2", 00:22:24.512 "adrfam": "ipv4", 00:22:24.512 "trsvcid": "4420", 00:22:24.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.512 "hdgst": false, 00:22:24.512 "ddgst": false 00:22:24.512 }, 00:22:24.512 "method": "bdev_nvme_attach_controller" 00:22:24.512 }' 00:22:24.512 [2024-06-11 13:50:17.395179] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:22:24.512 [2024-06-11 13:50:17.395241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438932 ] 00:22:24.771 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.771 [2024-06-11 13:50:17.496700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:24.771 [2024-06-11 13:50:17.581188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.771 [2024-06-11 13:50:17.581283] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.771 [2024-06-11 13:50:17.581287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.031 I/O targets: 00:22:25.031 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:25.031 00:22:25.031 00:22:25.031 CUnit - A unit testing framework for C - Version 2.1-3 00:22:25.031 http://cunit.sourceforge.net/ 00:22:25.031 00:22:25.031 00:22:25.031 Suite: bdevio tests on: Nvme1n1 00:22:25.290 Test: blockdev write read block ...passed 00:22:25.290 Test: blockdev write zeroes read block ...passed 00:22:25.290 Test: blockdev write zeroes read no split ...passed 00:22:25.290 Test: blockdev write zeroes read split ...passed 00:22:25.290 Test: blockdev write zeroes read split partial ...passed 00:22:25.290 Test: blockdev reset ...[2024-06-11 13:50:18.121376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.290 [2024-06-11 13:50:18.121452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18bb160 (9): Bad file descriptor 00:22:25.549 [2024-06-11 13:50:18.217546] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:25.549 passed 00:22:25.549 Test: blockdev write read 8 blocks ...passed 00:22:25.549 Test: blockdev write read size > 128k ...passed 00:22:25.549 Test: blockdev write read invalid size ...passed 00:22:25.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:25.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:25.549 Test: blockdev write read max offset ...passed 00:22:25.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:25.549 Test: blockdev writev readv 8 blocks ...passed 00:22:25.549 Test: blockdev writev readv 30 x 1block ...passed 00:22:25.549 Test: blockdev writev readv block ...passed 00:22:25.549 Test: blockdev writev readv size > 128k ...passed 00:22:25.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:25.549 Test: blockdev comparev and writev ...[2024-06-11 13:50:18.432805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.432835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.432851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.432862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.433182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.433195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.433209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.433218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.433553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.433565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.433578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.433587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.433922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.433933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.549 [2024-06-11 13:50:18.433947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.549 [2024-06-11 13:50:18.433956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.808 passed 00:22:25.808 Test: blockdev nvme passthru rw ...passed 00:22:25.808 Test: blockdev nvme passthru vendor specific ...[2024-06-11 13:50:18.515968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.808 [2024-06-11 13:50:18.515989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.808 [2024-06-11 13:50:18.516178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.808 [2024-06-11 13:50:18.516190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.808 [2024-06-11 13:50:18.516371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.808 [2024-06-11 13:50:18.516383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.808 [2024-06-11 13:50:18.516572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.808 [2024-06-11 13:50:18.516584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.808 passed 00:22:25.808 Test: blockdev nvme admin passthru ...passed 00:22:25.808 Test: blockdev copy ...passed 00:22:25.808 00:22:25.808 Run Summary: Type Total Ran Passed Failed Inactive 00:22:25.808 suites 1 1 n/a 0 0 00:22:25.808 tests 23 23 23 0 0 00:22:25.808 asserts 152 152 152 0 n/a 00:22:25.808 00:22:25.808 Elapsed time = 1.332 seconds 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.067 rmmod nvme_tcp 00:22:26.067 rmmod nvme_fabrics 00:22:26.067 rmmod nvme_keyring 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1438879 ']' 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1438879 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
1438879 ']' 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 1438879 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1438879 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:22:26.067 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1438879' 00:22:26.067 killing process with pid 1438879 00:22:26.068 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 1438879 00:22:26.068 13:50:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 1438879 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.327 13:50:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.865 13:50:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.865 00:22:28.865 real 0m12.061s 00:22:28.865 user 0m14.866s 00:22:28.865 sys 0m6.066s 00:22:28.865 13:50:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:28.865 13:50:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:28.865 ************************************ 00:22:28.865 END TEST nvmf_bdevio 00:22:28.865 ************************************ 00:22:28.865 13:50:21 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:28.865 13:50:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:28.865 13:50:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:28.865 13:50:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.865 ************************************ 00:22:28.865 START TEST nvmf_auth_target 00:22:28.865 ************************************ 00:22:28.865 13:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:28.865 * Looking for test storage... 
00:22:28.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.866 13:50:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.479 13:50:27 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:35.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:35.479 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:22:35.479 Found net devices under 0000:af:00.0: cvl_0_0 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:35.479 Found net devices under 0000:af:00.1: cvl_0_1 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.479 13:50:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.479 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:35.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:35.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:22:35.480
00:22:35.480 --- 10.0.0.2 ping statistics ---
00:22:35.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:35.480 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:35.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:35.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:22:35.480
00:22:35.480 --- 10.0.0.1 ping statistics ---
00:22:35.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:35.480 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1442876
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1442876
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1442876 ']'
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
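The traces above are nvmf_tcp_init building the test bed: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and connectivity is ping-verified in both directions before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same topology, with a veth pair standing in for the physical ports (the namespace and interface names below are illustrative, not the ones used in this run):

#!/usr/bin/env bash
# Sketch only: reproduce the two-sided namespace layout traced above. Run as root.
set -e

NS=nvmf_tgt_ns                                   # hypothetical; this run uses cvl_0_0_ns_spdk
ip netns add "$NS"
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns "$NS"                 # target side lives in the namespace

ip addr add 10.0.0.1/24 dev veth_host                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt   # target address
ip link set veth_host up
ip netns exec "$NS" ip link set veth_tgt up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic to the listener port, then check both directions,
# mirroring the iptables rule and the two pings in the log.
iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

A target started with ip netns exec "$NS" ... nvmf_tgt then listens on 10.0.0.2:4420, which is how nvmfappstart runs it in the trace above.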
00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:35.480 13:50:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1443154 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2789373abadffde9ea380e5abf6988155ffb6842fd2a7940 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ceK 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2789373abadffde9ea380e5abf6988155ffb6842fd2a7940 0 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2789373abadffde9ea380e5abf6988155ffb6842fd2a7940 0 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2789373abadffde9ea380e5abf6988155ffb6842fd2a7940 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ceK 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ceK 00:22:36.419 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ceK 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa43bea7b6b3aeef0d00daad3d2ef2070dfc79bdaf1486ef2099e1cfa0782df4 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RGJ 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa43bea7b6b3aeef0d00daad3d2ef2070dfc79bdaf1486ef2099e1cfa0782df4 3 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa43bea7b6b3aeef0d00daad3d2ef2070dfc79bdaf1486ef2099e1cfa0782df4 3 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa43bea7b6b3aeef0d00daad3d2ef2070dfc79bdaf1486ef2099e1cfa0782df4 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RGJ 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RGJ 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.RGJ 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3ac3b568226e1161578bb58ae90d5a7 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.swE 00:22:36.679 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3ac3b568226e1161578bb58ae90d5a7 1 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3ac3b568226e1161578bb58ae90d5a7 1 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e3ac3b568226e1161578bb58ae90d5a7 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.swE 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.swE 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.swE 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=882e0ac5560a8d150d75974451f7af4567e3b219a6de2f64 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YJ7 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 882e0ac5560a8d150d75974451f7af4567e3b219a6de2f64 2 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 882e0ac5560a8d150d75974451f7af4567e3b219a6de2f64 2 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=882e0ac5560a8d150d75974451f7af4567e3b219a6de2f64 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YJ7 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YJ7 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.YJ7 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=524dcf75e1dfd312679a450cc3b4eebdf36aeca72231c1d3 00:22:36.680 
13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0Qh 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 524dcf75e1dfd312679a450cc3b4eebdf36aeca72231c1d3 2 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 524dcf75e1dfd312679a450cc3b4eebdf36aeca72231c1d3 2 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=524dcf75e1dfd312679a450cc3b4eebdf36aeca72231c1d3 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:36.680 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0Qh 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0Qh 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.0Qh 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a5478ef215eb2b469589e064968b1127 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fMf 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a5478ef215eb2b469589e064968b1127 1 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a5478ef215eb2b469589e064968b1127 1 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a5478ef215eb2b469589e064968b1127 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fMf 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fMf 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.fMf 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=29892f5196766180294664a528efd3e92394870ecbee716831bbcccba6a68c9e 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0Zo 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 29892f5196766180294664a528efd3e92394870ecbee716831bbcccba6a68c9e 3 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 29892f5196766180294664a528efd3e92394870ecbee716831bbcccba6a68c9e 3 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=29892f5196766180294664a528efd3e92394870ecbee716831bbcccba6a68c9e 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0Zo 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0Zo 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.0Zo 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1442876 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1442876 ']' 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
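Each gen_dhchap_key call above draws len/2 random bytes with xxd -p and keeps them as a hex string, and format_key then wraps that string as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64 payload>:. The hash ids line up with the digest arguments in the traces: 00 for no transform, 01 for sha256, 02 for sha384, 03 for sha512. A condensed sketch of that encoding (the function name is illustrative; the CRC32 tail folded into the base64 payload is inferred from SPDK's helper and from the secrets visible in the nvme connect commands later in this log):

# Sketch of gen_dhchap_key + format_key as traced above (names are illustrative).
gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key
    # hex string of $len characters, i.e. len/2 bytes of /dev/urandom
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 - "$digest_id" "$key" << 'EOF'
import base64, sys, zlib
digest_id, key = int(sys.argv[1]), sys.argv[2].encode()
# assumed layout: DHHC-1:<2-hex-digit hash id>:<base64(hex string + CRC32, little endian)>:
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest_id, base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key_sketch 1 32   # a sha256-transform secret shaped like key1 above (DHHC-1:01:...)

As a cross-check, base64-decoding the body of the DHHC-1:00: secret in the first nvme connect below yields the same 48-character hex string printed for key0 in the trace, plus four trailing CRC bytes.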
00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:36.940 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1443154 /var/tmp/host.sock 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1443154 ']' 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:37.200 13:50:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ceK 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ceK 00:22:37.459 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ceK 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.RGJ ]] 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RGJ 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RGJ 00:22:37.718 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RGJ 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.swE 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.swE 00:22:37.977 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.swE 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.YJ7 ]] 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YJ7 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YJ7 00:22:38.235 13:50:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.YJ7 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qh 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qh 00:22:38.494 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qh 00:22:38.752 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.fMf ]] 00:22:38.752 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fMf 00:22:38.752 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.752 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fMf 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.fMf 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zo 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zo 00:22:38.753 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.0Zo 00:22:39.012 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:22:39.012 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:39.012 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.012 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.012 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:39.012 13:50:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.272 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.531 00:22:39.531 13:50:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:39.531 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:39.531 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:39.790 {
00:22:39.790 "cntlid": 1,
00:22:39.790 "qid": 0,
00:22:39.790 "state": "enabled",
00:22:39.790 "listen_address": {
00:22:39.790 "trtype": "TCP",
00:22:39.790 "adrfam": "IPv4",
00:22:39.790 "traddr": "10.0.0.2",
00:22:39.790 "trsvcid": "4420"
00:22:39.790 },
00:22:39.790 "peer_address": {
00:22:39.790 "trtype": "TCP",
00:22:39.790 "adrfam": "IPv4",
00:22:39.790 "traddr": "10.0.0.1",
00:22:39.790 "trsvcid": "33786"
00:22:39.790 },
00:22:39.790 "auth": {
00:22:39.790 "state": "completed",
00:22:39.790 "digest": "sha256",
00:22:39.790 "dhgroup": "null"
00:22:39.790 }
00:22:39.790 }
00:22:39.790 ]'
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:22:39.790 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:40.049 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:40.049 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:40.049 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:40.308 13:50:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=:
00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:40.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x 00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:40.877 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.137 13:50:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.396 00:22:41.396 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.396 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.396 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:41.656 { 00:22:41.656 "cntlid": 3, 00:22:41.656 "qid": 0, 00:22:41.656 "state": "enabled", 00:22:41.656 "listen_address": { 00:22:41.656 
"trtype": "TCP", 00:22:41.656 "adrfam": "IPv4", 00:22:41.656 "traddr": "10.0.0.2", 00:22:41.656 "trsvcid": "4420" 00:22:41.656 }, 00:22:41.656 "peer_address": { 00:22:41.656 "trtype": "TCP", 00:22:41.656 "adrfam": "IPv4", 00:22:41.656 "traddr": "10.0.0.1", 00:22:41.656 "trsvcid": "54226" 00:22:41.656 }, 00:22:41.656 "auth": { 00:22:41.656 "state": "completed", 00:22:41.656 "digest": "sha256", 00:22:41.656 "dhgroup": "null" 00:22:41.656 } 00:22:41.656 } 00:22:41.656 ]' 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:41.656 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.915 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.915 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.915 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.915 13:50:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:22:42.852 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.852 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:42.852 13:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.853 13:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.853 13:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.853 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.853 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:42.853 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.112 13:50:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:43.112 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.372 13:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.631 { 00:22:43.631 "cntlid": 5, 00:22:43.631 "qid": 0, 00:22:43.631 "state": "enabled", 00:22:43.631 "listen_address": { 00:22:43.631 "trtype": "TCP", 00:22:43.631 "adrfam": "IPv4", 00:22:43.631 "traddr": "10.0.0.2", 00:22:43.631 "trsvcid": "4420" 00:22:43.631 }, 00:22:43.631 "peer_address": { 00:22:43.631 "trtype": "TCP", 00:22:43.631 "adrfam": "IPv4", 00:22:43.631 "traddr": "10.0.0.1", 00:22:43.631 "trsvcid": "54262" 00:22:43.631 }, 00:22:43.631 "auth": { 00:22:43.631 "state": "completed", 00:22:43.631 "digest": "sha256", 00:22:43.631 "dhgroup": "null" 00:22:43.631 } 00:22:43.631 } 00:22:43.631 ]' 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.631 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.891 13:50:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:44.828 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:45.088 00:22:45.088 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.088 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.088 13:50:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.347 { 00:22:45.347 "cntlid": 7, 00:22:45.347 "qid": 0, 00:22:45.347 "state": "enabled", 00:22:45.347 "listen_address": { 00:22:45.347 "trtype": "TCP", 00:22:45.347 "adrfam": "IPv4", 00:22:45.347 "traddr": "10.0.0.2", 00:22:45.347 "trsvcid": "4420" 00:22:45.347 }, 00:22:45.347 "peer_address": { 00:22:45.347 "trtype": "TCP", 00:22:45.347 "adrfam": "IPv4", 00:22:45.347 "traddr": "10.0.0.1", 00:22:45.347 "trsvcid": "54274" 00:22:45.347 }, 00:22:45.347 "auth": { 00:22:45.347 "state": "completed", 00:22:45.347 "digest": "sha256", 00:22:45.347 "dhgroup": "null" 00:22:45.347 } 00:22:45.347 } 00:22:45.347 ]' 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:45.347 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.607 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:45.607 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.607 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.607 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.607 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.865 13:50:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.435 
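
Every pass in this stretch of the log, four key ids per dhgroup, is a single call to the connect_authenticate helper in target/auth.sh. Its shape can be read off the target/auth.sh@34-@49 markers above; what follows is a reconstruction from the trace, not the verbatim source, with the subsystem and host NQNs filled in from the values the script actually uses:

    connect_authenticate() {    # args: <digest> <dhgroup> <keyid>
        local digest dhgroup key ckey qpairs
        digest="$1" dhgroup="$2" key="key$3"
        # Pass a controller key only if one was generated for this id (see @37).
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

        # Target side: allow the host NQN on the subsystem with the key under test.
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
            --dhchap-key "$key" "${ckey[@]}"
        # Host side: attach a controller; the DH-HMAC-CHAP handshake runs here.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
            -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" "${ckey[@]}"
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        # Confirm what was negotiated by inspecting the qpair on the target.
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
        hostrpc bdev_nvme_detach_controller nvme0
    }

The @52-@56 markers that follow each detach repeat the same handshake from the kernel initiator (nvme connect with --dhchap-secret/--dhchap-ctrl-secret) before the host is removed from the subsystem again.
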
13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.435 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.694 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.952 00:22:46.952 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.952 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.952 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.210 13:50:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.210 { 00:22:47.210 "cntlid": 9, 00:22:47.210 "qid": 0, 00:22:47.210 "state": "enabled", 00:22:47.210 "listen_address": { 00:22:47.210 "trtype": "TCP", 00:22:47.210 "adrfam": "IPv4", 00:22:47.210 "traddr": "10.0.0.2", 00:22:47.210 "trsvcid": "4420" 00:22:47.210 }, 00:22:47.210 "peer_address": { 00:22:47.210 "trtype": "TCP", 00:22:47.210 "adrfam": "IPv4", 00:22:47.210 "traddr": "10.0.0.1", 00:22:47.210 "trsvcid": "54292" 00:22:47.210 }, 00:22:47.210 "auth": { 00:22:47.210 "state": "completed", 00:22:47.210 "digest": "sha256", 00:22:47.210 "dhgroup": "ffdhe2048" 00:22:47.210 } 00:22:47.210 } 00:22:47.210 ]' 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:47.210 13:50:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.210 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.210 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.210 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.468 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:22:48.036 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.295 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:48.295 13:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.295 13:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.295 13:50:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.295 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.296 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:48.296 13:50:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.555 13:50:41 
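
The driver around that helper is visible in the @92-@96 markers at the top of each group: before every attach, the host's bdev_nvme module is pinned to a single digest and dhgroup, so the negotiation can only land on the pair being tested and the qpair checks above become meaningful. Reconstructed as a sketch (the fixed sha256 is presumably supplied by an outer loop over digests that lies outside this excerpt):

    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            # Restrict the host to exactly one digest/dhgroup pair per round.
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
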
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.555 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.814 00:22:48.814 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.814 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.814 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.073 { 00:22:49.073 "cntlid": 11, 00:22:49.073 "qid": 0, 00:22:49.073 "state": "enabled", 00:22:49.073 "listen_address": { 00:22:49.073 "trtype": "TCP", 00:22:49.073 "adrfam": "IPv4", 00:22:49.073 "traddr": "10.0.0.2", 00:22:49.073 "trsvcid": "4420" 00:22:49.073 }, 00:22:49.073 "peer_address": { 00:22:49.073 "trtype": "TCP", 00:22:49.073 "adrfam": "IPv4", 00:22:49.073 "traddr": "10.0.0.1", 00:22:49.073 "trsvcid": "54318" 00:22:49.073 }, 00:22:49.073 "auth": { 00:22:49.073 "state": "completed", 00:22:49.073 "digest": "sha256", 00:22:49.073 "dhgroup": "ffdhe2048" 00:22:49.073 } 00:22:49.073 } 00:22:49.073 ]' 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.073 13:50:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.073 13:50:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.333 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:49.901 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.160 13:50:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.421 00:22:50.421 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.421 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.421 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.721 { 00:22:50.721 "cntlid": 13, 00:22:50.721 "qid": 0, 00:22:50.721 "state": "enabled", 00:22:50.721 "listen_address": { 00:22:50.721 "trtype": "TCP", 00:22:50.721 "adrfam": "IPv4", 00:22:50.721 "traddr": "10.0.0.2", 00:22:50.721 "trsvcid": "4420" 00:22:50.721 }, 00:22:50.721 "peer_address": { 00:22:50.721 "trtype": "TCP", 00:22:50.721 "adrfam": "IPv4", 00:22:50.721 "traddr": "10.0.0.1", 00:22:50.721 "trsvcid": "54356" 00:22:50.721 }, 00:22:50.721 "auth": { 00:22:50.721 "state": "completed", 00:22:50.721 "digest": "sha256", 00:22:50.721 "dhgroup": "ffdhe2048" 00:22:50.721 } 00:22:50.721 } 00:22:50.721 ]' 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:50.721 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.980 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.980 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.980 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.240 13:50:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:22:51.808 13:50:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:51.808 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.067 13:50:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.326 00:22:52.326 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.326 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.326 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
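
The nvme connect invocations above hand the same keys to the kernel initiator in the standard DH-HMAC-CHAP secret representation, DHHC-1:<id>:<base64>:. As far as that format goes (worth checking against NVMe TP 8006 before relying on it), the id field names the hash tied to the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret with a CRC-32 appended, which is easy to sanity-check by hand:

    # Split one of the secrets from this log into its fields.
    key='DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm:'
    IFS=: read -r fmt hash b64 _ <<< "$key"
    echo "format=$fmt hash_id=$hash"      # 01 -> SHA-256, i.e. a 32-byte secret
    echo -n "$b64" | base64 -d | wc -c    # 36 bytes = 32-byte secret + 4-byte CRC-32

That lines up with the key ids used throughout this run: key0 carries a DHHC-1:00 secret, key1 a 01, key2 a 02 and key3 a 03.
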
00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.585 { 00:22:52.585 "cntlid": 15, 00:22:52.585 "qid": 0, 00:22:52.585 "state": "enabled", 00:22:52.585 "listen_address": { 00:22:52.585 "trtype": "TCP", 00:22:52.585 "adrfam": "IPv4", 00:22:52.585 "traddr": "10.0.0.2", 00:22:52.585 "trsvcid": "4420" 00:22:52.585 }, 00:22:52.585 "peer_address": { 00:22:52.585 "trtype": "TCP", 00:22:52.585 "adrfam": "IPv4", 00:22:52.585 "traddr": "10.0.0.1", 00:22:52.585 "trsvcid": "36254" 00:22:52.585 }, 00:22:52.585 "auth": { 00:22:52.585 "state": "completed", 00:22:52.585 "digest": "sha256", 00:22:52.585 "dhgroup": "ffdhe2048" 00:22:52.585 } 00:22:52.585 } 00:22:52.585 ]' 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.585 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.845 13:50:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.782 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.350 00:22:54.350 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.350 13:50:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.350 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:54.350 { 00:22:54.350 "cntlid": 17, 00:22:54.350 "qid": 0, 00:22:54.350 "state": "enabled", 00:22:54.350 "listen_address": { 00:22:54.350 "trtype": "TCP", 00:22:54.350 "adrfam": "IPv4", 00:22:54.350 "traddr": "10.0.0.2", 00:22:54.350 "trsvcid": "4420" 00:22:54.350 }, 00:22:54.350 "peer_address": { 00:22:54.350 "trtype": "TCP", 00:22:54.350 "adrfam": "IPv4", 00:22:54.350 "traddr": "10.0.0.1", 00:22:54.350 "trsvcid": "36292" 00:22:54.350 }, 00:22:54.350 "auth": { 00:22:54.350 "state": "completed", 00:22:54.350 "digest": "sha256", 00:22:54.350 "dhgroup": "ffdhe3072" 00:22:54.350 } 00:22:54.350 } 00:22:54.350 ]' 00:22:54.350 13:50:47 
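
The qpairs blobs recurring through this log are the raw nvmf_subsystem_get_qpairs output; beyond the auth block, peer_address.trsvcid shows every attach arriving from a fresh ephemeral TCP port while the listener stays on 4420. The three .auth probes that follow each blob could equally be collapsed into one jq expression; a minor convenience, not what the script does:

    # One-shot variant of the @46-@48 probes:
    jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"' <<< "$qpairs"
    # -> sha256 ffdhe3072 completed
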
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.610 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.868 13:50:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.806 
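
A detail that is easy to miss in the noise: the key3 rounds (in the null and ffdhe2048 passes above) carry no --dhchap-ctrlr-key on the RPC side and no --dhchap-ctrl-secret on the nvme-cli side. That falls straight out of the ${ckeys[$3]:+...} expansion on the @37 marker: with no controller key defined for id 3, the array expands to nothing and the round exercises one-way authentication, host to target only. The mechanism in isolation:

    ckeys=(ck0 ck1 ck2)                               # nothing at index 3
    ckey3=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    ckey1=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
    echo "${#ckey3[@]} ${#ckey1[@]}"                  # -> "0 2": key3 omits the flag pair
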
13:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.806 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.807 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.066 00:22:56.066 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.066 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.066 13:50:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.325 { 00:22:56.325 "cntlid": 19, 00:22:56.325 "qid": 0, 00:22:56.325 "state": "enabled", 00:22:56.325 "listen_address": { 00:22:56.325 "trtype": "TCP", 00:22:56.325 "adrfam": "IPv4", 00:22:56.325 "traddr": "10.0.0.2", 00:22:56.325 "trsvcid": "4420" 00:22:56.325 }, 00:22:56.325 "peer_address": { 00:22:56.325 "trtype": "TCP", 00:22:56.325 "adrfam": "IPv4", 00:22:56.325 "traddr": "10.0.0.1", 00:22:56.325 "trsvcid": "36320" 00:22:56.325 }, 00:22:56.325 "auth": { 00:22:56.325 "state": "completed", 00:22:56.325 "digest": "sha256", 00:22:56.325 "dhgroup": "ffdhe3072" 00:22:56.325 } 00:22:56.325 } 00:22:56.325 ]' 00:22:56.325 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.585 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.844 13:50:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:57.413 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:57.672 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:22:57.672 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.672 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:57.672 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:57.672 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:57.672 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.673 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.673 13:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.673 13:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.673 13:50:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.673 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.673 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.932 00:22:58.191 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:58.191 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.191 13:50:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:58.191 { 00:22:58.191 "cntlid": 21, 00:22:58.191 "qid": 0, 00:22:58.191 "state": "enabled", 00:22:58.191 "listen_address": { 00:22:58.191 "trtype": "TCP", 00:22:58.191 "adrfam": "IPv4", 00:22:58.191 "traddr": "10.0.0.2", 00:22:58.191 "trsvcid": "4420" 00:22:58.191 }, 00:22:58.191 "peer_address": { 00:22:58.191 "trtype": "TCP", 00:22:58.191 "adrfam": "IPv4", 00:22:58.191 "traddr": "10.0.0.1", 00:22:58.191 "trsvcid": "36354" 00:22:58.191 }, 00:22:58.191 "auth": { 00:22:58.191 "state": "completed", 00:22:58.191 "digest": "sha256", 00:22:58.191 "dhgroup": "ffdhe3072" 00:22:58.191 } 00:22:58.191 } 00:22:58.191 ]' 00:22:58.191 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.451 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.710 13:50:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:22:59.280 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.280 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:59.280 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.280 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:59.539 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:59.799 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.058 13:50:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.317 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.317 { 00:23:00.317 "cntlid": 23, 00:23:00.317 "qid": 0, 00:23:00.317 "state": "enabled", 00:23:00.317 "listen_address": { 00:23:00.317 "trtype": "TCP", 00:23:00.317 "adrfam": "IPv4", 00:23:00.317 "traddr": "10.0.0.2", 00:23:00.317 "trsvcid": "4420" 00:23:00.317 }, 00:23:00.317 "peer_address": { 00:23:00.317 "trtype": "TCP", 00:23:00.317 
"adrfam": "IPv4", 00:23:00.317 "traddr": "10.0.0.1", 00:23:00.317 "trsvcid": "36386" 00:23:00.317 }, 00:23:00.317 "auth": { 00:23:00.317 "state": "completed", 00:23:00.317 "digest": "sha256", 00:23:00.317 "dhgroup": "ffdhe3072" 00:23:00.317 } 00:23:00.317 } 00:23:00.317 ]' 00:23:00.317 13:50:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.317 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.577 13:50:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:01.146 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.405 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.974 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.974 { 00:23:01.974 "cntlid": 25, 00:23:01.974 "qid": 0, 00:23:01.974 "state": "enabled", 00:23:01.974 "listen_address": { 00:23:01.974 "trtype": "TCP", 00:23:01.974 "adrfam": "IPv4", 00:23:01.974 "traddr": "10.0.0.2", 00:23:01.974 "trsvcid": "4420" 00:23:01.974 }, 00:23:01.974 "peer_address": { 00:23:01.974 "trtype": "TCP", 00:23:01.974 "adrfam": "IPv4", 00:23:01.974 "traddr": "10.0.0.1", 00:23:01.974 "trsvcid": "41158" 00:23:01.974 }, 00:23:01.974 "auth": { 00:23:01.974 "state": "completed", 00:23:01.974 "digest": "sha256", 00:23:01.974 "dhgroup": "ffdhe4096" 00:23:01.974 } 00:23:01.974 } 00:23:01.974 ]' 00:23:01.974 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.234 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:02.234 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.234 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:02.234 13:50:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.234 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.234 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.234 
13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.493 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:03.062 13:50:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.322 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.581 00:23:03.581 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.581 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.581 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.841 { 00:23:03.841 "cntlid": 27, 00:23:03.841 "qid": 0, 00:23:03.841 "state": "enabled", 00:23:03.841 "listen_address": { 00:23:03.841 "trtype": "TCP", 00:23:03.841 "adrfam": "IPv4", 00:23:03.841 "traddr": "10.0.0.2", 00:23:03.841 "trsvcid": "4420" 00:23:03.841 }, 00:23:03.841 "peer_address": { 00:23:03.841 "trtype": "TCP", 00:23:03.841 "adrfam": "IPv4", 00:23:03.841 "traddr": "10.0.0.1", 00:23:03.841 "trsvcid": "41186" 00:23:03.841 }, 00:23:03.841 "auth": { 00:23:03.841 "state": "completed", 00:23:03.841 "digest": "sha256", 00:23:03.841 "dhgroup": "ffdhe4096" 00:23:03.841 } 00:23:03.841 } 00:23:03.841 ]' 00:23:03.841 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.100 13:50:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.359 13:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
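
The iterations in this trace all follow the same connect_authenticate pattern from target/auth.sh: configure the host-side DH-HMAC-CHAP parameters, register the host NQN on the target with the key under test, attach a controller (which runs the handshake), check the resulting qpair, then tear everything down and repeat for the next dhgroup/key pair. Throughout this stretch of the log the digest is fixed at sha256 and only the dhgroup and key index vary. A minimal sketch of one iteration, reconstructed from the xtrace above — the rpc.py path, sockets, NQNs and addresses are taken from the log, while the loop scaffolding, variable names and the assumption that target-side rpc_cmd calls hit the default SPDK socket are reconstructions, not verbatim target/auth.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3; do
            # Host side (host.sock): advertise the digest/dhgroup before attaching.
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # Target side (default socket): allow this host with the key under test;
            # per the trace, a --dhchap-ctrlr-key is added only when a ckey exists.
            "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
            # Host side: attach, which performs the DH-HMAC-CHAP handshake.
            "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
                -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
                --dhchap-key "key$keyid"
            # ... verify the qpair auth state (see the jq checks below), detach,
            # exercise the same key via nvme-cli connect/disconnect, then clean up:
            "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
            "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
        done
    done
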
00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:04.927 13:50:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.187 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.446 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.706 
13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:05.706 { 00:23:05.706 "cntlid": 29, 00:23:05.706 "qid": 0, 00:23:05.706 "state": "enabled", 00:23:05.706 "listen_address": { 00:23:05.706 "trtype": "TCP", 00:23:05.706 "adrfam": "IPv4", 00:23:05.706 "traddr": "10.0.0.2", 00:23:05.706 "trsvcid": "4420" 00:23:05.706 }, 00:23:05.706 "peer_address": { 00:23:05.706 "trtype": "TCP", 00:23:05.706 "adrfam": "IPv4", 00:23:05.706 "traddr": "10.0.0.1", 00:23:05.706 "trsvcid": "41208" 00:23:05.706 }, 00:23:05.706 "auth": { 00:23:05.706 "state": "completed", 00:23:05.706 "digest": "sha256", 00:23:05.706 "dhgroup": "ffdhe4096" 00:23:05.706 } 00:23:05.706 } 00:23:05.706 ]' 00:23:05.706 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.965 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.225 13:50:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.793 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:07.052 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:23:07.052 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:07.052 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:23:07.052 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:07.052 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:07.052 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.053 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:07.053 13:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.053 13:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.053 13:50:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.053 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.053 13:50:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.312 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.626 { 00:23:07.626 "cntlid": 31, 00:23:07.626 "qid": 0, 00:23:07.626 "state": "enabled", 00:23:07.626 "listen_address": { 00:23:07.626 "trtype": "TCP", 00:23:07.626 "adrfam": "IPv4", 00:23:07.626 "traddr": "10.0.0.2", 00:23:07.626 "trsvcid": "4420" 00:23:07.626 }, 00:23:07.626 "peer_address": { 00:23:07.626 "trtype": "TCP", 00:23:07.626 "adrfam": "IPv4", 00:23:07.626 "traddr": "10.0.0.1", 00:23:07.626 "trsvcid": "41236" 00:23:07.626 }, 00:23:07.626 "auth": { 00:23:07.626 "state": "completed", 00:23:07.626 "digest": "sha256", 00:23:07.626 "dhgroup": "ffdhe4096" 00:23:07.626 } 00:23:07.626 } 00:23:07.626 ]' 00:23:07.626 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.896 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.155 13:51:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.723 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.724 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.724 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:23:08.983 13:51:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.550 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.550 13:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.809 { 00:23:09.809 "cntlid": 33, 00:23:09.809 "qid": 0, 00:23:09.809 "state": "enabled", 00:23:09.809 "listen_address": { 00:23:09.809 "trtype": "TCP", 00:23:09.809 "adrfam": "IPv4", 00:23:09.809 "traddr": "10.0.0.2", 00:23:09.809 "trsvcid": "4420" 00:23:09.809 }, 00:23:09.809 "peer_address": { 00:23:09.809 "trtype": "TCP", 00:23:09.809 "adrfam": "IPv4", 00:23:09.809 "traddr": "10.0.0.1", 00:23:09.809 "trsvcid": "41268" 00:23:09.809 }, 00:23:09.809 "auth": { 00:23:09.809 "state": "completed", 00:23:09.809 "digest": "sha256", 00:23:09.809 "dhgroup": "ffdhe6144" 00:23:09.809 } 00:23:09.809 } 00:23:09.809 ]' 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.809 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.068 13:51:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:23:11.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.007 13:51:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.576 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
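
After each attach, the test confirms that authentication actually ran on the new admin qpair: it lists the host-side controllers, then pulls the subsystem's qpairs from the target and asserts on the auth object whose JSON shape is visible in the dumps above and below. A condensed sketch of those checks (auth.sh@44-48 in the trace), assuming the qpairs JSON is captured into a shell variable as the trace's qpairs='[...]' assignment suggests:

    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # auth.sh@46
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # auth.sh@47
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # auth.sh@48

The "state": "completed" assertion is the one the test keys on: it distinguishes a qpair that finished DH-HMAC-CHAP from one that was admitted without authenticating.
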
00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.576 { 00:23:11.576 "cntlid": 35, 00:23:11.576 "qid": 0, 00:23:11.576 "state": "enabled", 00:23:11.576 "listen_address": { 00:23:11.576 "trtype": "TCP", 00:23:11.576 "adrfam": "IPv4", 00:23:11.576 "traddr": "10.0.0.2", 00:23:11.576 "trsvcid": "4420" 00:23:11.576 }, 00:23:11.576 "peer_address": { 00:23:11.576 "trtype": "TCP", 00:23:11.576 "adrfam": "IPv4", 00:23:11.576 "traddr": "10.0.0.1", 00:23:11.576 "trsvcid": "39370" 00:23:11.576 }, 00:23:11.576 "auth": { 00:23:11.576 "state": "completed", 00:23:11.576 "digest": "sha256", 00:23:11.576 "dhgroup": "ffdhe6144" 00:23:11.576 } 00:23:11.576 } 00:23:11.576 ]' 00:23:11.576 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.835 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.095 13:51:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:12.665 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
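
The secrets passed to the nvme connect invocations in this trace (e.g. --dhchap-secret DHHC-1:02:... with --dhchap-ctrl-secret DHHC-1:01:...) use the standard DH-HMAC-CHAP secret representation: "DHHC-1:" followed by a two-digit field recording which HMAC, if any, was used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and a base64 payload carrying the secret plus a CRC-32, which is why the :03: keys in the log are visibly longer than the :00: ones. A --dhchap-ctrl-secret is supplied only for key indices that have a controller key, enabling bidirectional authentication; the key3 connects above pass only --dhchap-secret. A hedged example of producing and using such a key with nvme-cli — gen-dhchap-key and its flag names are recalled from recent nvme-cli and may differ by version, and in this test both sides are configured with the same pre-generated keys rather than a fresh one; only the connect flags are taken verbatim from the log:

    # Assumption: an nvme-cli build with DH-HMAC-CHAP support (gen-dhchap-key).
    key=$(nvme gen-dhchap-key --hmac=1 --key-length=32)   # prints a DHHC-1:01:... secret
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        --hostnqn "$hostnqn" --hostid "$hostid" --dhchap-secret "$key"
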
00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.924 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.925 13:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.925 13:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.925 13:51:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.925 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.925 13:51:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.493 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.493 { 00:23:13.493 "cntlid": 37, 00:23:13.493 "qid": 0, 00:23:13.493 "state": "enabled", 00:23:13.493 "listen_address": { 00:23:13.493 "trtype": "TCP", 00:23:13.493 "adrfam": "IPv4", 00:23:13.493 "traddr": "10.0.0.2", 00:23:13.493 "trsvcid": "4420" 00:23:13.493 }, 00:23:13.493 "peer_address": { 00:23:13.493 "trtype": "TCP", 00:23:13.493 "adrfam": "IPv4", 00:23:13.493 "traddr": "10.0.0.1", 00:23:13.493 "trsvcid": "39398" 00:23:13.493 }, 00:23:13.493 "auth": { 00:23:13.493 "state": "completed", 00:23:13.493 "digest": "sha256", 00:23:13.493 "dhgroup": "ffdhe6144" 00:23:13.493 } 00:23:13.493 } 00:23:13.493 ]' 00:23:13.493 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.753 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.012 13:51:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:14.580 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:14.839 13:51:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:15.408 00:23:15.408 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.408 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.408 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.667 { 00:23:15.667 "cntlid": 39, 00:23:15.667 "qid": 0, 00:23:15.667 "state": "enabled", 00:23:15.667 "listen_address": { 00:23:15.667 "trtype": "TCP", 00:23:15.667 "adrfam": "IPv4", 00:23:15.667 "traddr": "10.0.0.2", 00:23:15.667 "trsvcid": "4420" 00:23:15.667 }, 00:23:15.667 "peer_address": { 00:23:15.667 "trtype": "TCP", 00:23:15.667 "adrfam": "IPv4", 00:23:15.667 "traddr": "10.0.0.1", 00:23:15.667 "trsvcid": "39434" 00:23:15.667 }, 00:23:15.667 "auth": { 00:23:15.667 "state": "completed", 00:23:15.667 "digest": "sha256", 00:23:15.667 "dhgroup": "ffdhe6144" 00:23:15.667 } 00:23:15.667 } 00:23:15.667 ]' 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.667 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.926 13:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:16.494 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.756 13:51:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.695 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.695 13:51:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.695 { 00:23:17.695 "cntlid": 41, 00:23:17.695 "qid": 0, 00:23:17.695 "state": "enabled", 00:23:17.695 "listen_address": { 00:23:17.695 "trtype": "TCP", 00:23:17.695 "adrfam": "IPv4", 00:23:17.695 "traddr": "10.0.0.2", 00:23:17.695 "trsvcid": "4420" 00:23:17.695 }, 00:23:17.695 "peer_address": { 00:23:17.695 "trtype": "TCP", 00:23:17.695 "adrfam": "IPv4", 00:23:17.695 "traddr": "10.0.0.1", 00:23:17.695 "trsvcid": "39480" 00:23:17.695 }, 00:23:17.695 "auth": { 00:23:17.695 "state": "completed", 00:23:17.695 "digest": "sha256", 00:23:17.695 "dhgroup": "ffdhe8192" 00:23:17.695 } 00:23:17.695 } 00:23:17.695 ]' 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:17.695 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.954 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.954 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.954 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.214 13:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.782 13:51:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.782 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.041 13:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.610 00:23:19.610 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:19.610 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:19.610 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.868 { 00:23:19.868 "cntlid": 43, 00:23:19.868 "qid": 0, 00:23:19.868 "state": "enabled", 00:23:19.868 "listen_address": { 00:23:19.868 "trtype": "TCP", 00:23:19.868 "adrfam": "IPv4", 00:23:19.868 "traddr": "10.0.0.2", 00:23:19.868 "trsvcid": "4420" 00:23:19.868 }, 00:23:19.868 "peer_address": { 00:23:19.868 "trtype": "TCP", 00:23:19.868 
"adrfam": "IPv4", 00:23:19.868 "traddr": "10.0.0.1", 00:23:19.868 "trsvcid": "39500" 00:23:19.868 }, 00:23:19.868 "auth": { 00:23:19.868 "state": "completed", 00:23:19.868 "digest": "sha256", 00:23:19.868 "dhgroup": "ffdhe8192" 00:23:19.868 } 00:23:19.868 } 00:23:19.868 ]' 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:19.868 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.126 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:20.126 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.126 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.126 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.127 13:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.385 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:20.952 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.211 13:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.779 00:23:21.779 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:21.779 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:21.779 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.039 { 00:23:22.039 "cntlid": 45, 00:23:22.039 "qid": 0, 00:23:22.039 "state": "enabled", 00:23:22.039 "listen_address": { 00:23:22.039 "trtype": "TCP", 00:23:22.039 "adrfam": "IPv4", 00:23:22.039 "traddr": "10.0.0.2", 00:23:22.039 "trsvcid": "4420" 00:23:22.039 }, 00:23:22.039 "peer_address": { 00:23:22.039 "trtype": "TCP", 00:23:22.039 "adrfam": "IPv4", 00:23:22.039 "traddr": "10.0.0.1", 00:23:22.039 "trsvcid": "56534" 00:23:22.039 }, 00:23:22.039 "auth": { 00:23:22.039 "state": "completed", 00:23:22.039 "digest": "sha256", 00:23:22.039 "dhgroup": "ffdhe8192" 00:23:22.039 } 00:23:22.039 } 00:23:22.039 ]' 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.039 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:22.298 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.298 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.298 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.298 13:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.557 13:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.126 13:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.385 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:23.386 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:23.952 00:23:23.952 13:51:16 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:23.952 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:23.952 13:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:24.210 { 00:23:24.210 "cntlid": 47, 00:23:24.210 "qid": 0, 00:23:24.210 "state": "enabled", 00:23:24.210 "listen_address": { 00:23:24.210 "trtype": "TCP", 00:23:24.210 "adrfam": "IPv4", 00:23:24.210 "traddr": "10.0.0.2", 00:23:24.210 "trsvcid": "4420" 00:23:24.210 }, 00:23:24.210 "peer_address": { 00:23:24.210 "trtype": "TCP", 00:23:24.210 "adrfam": "IPv4", 00:23:24.210 "traddr": "10.0.0.1", 00:23:24.210 "trsvcid": "56564" 00:23:24.210 }, 00:23:24.210 "auth": { 00:23:24.210 "state": "completed", 00:23:24.210 "digest": "sha256", 00:23:24.210 "dhgroup": "ffdhe8192" 00:23:24.210 } 00:23:24.210 } 00:23:24.210 ]' 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:24.210 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:24.469 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:24.469 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:24.469 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.469 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.469 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.733 13:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:25.365 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.365 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:25.365 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:25.365 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.365 13:51:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:25.365 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:25.366 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.366 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:25.366 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:25.366 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.625 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.884 00:23:25.884 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.884 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.884 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.143 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.143 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.143 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.143 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.143 13:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.143 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:23:26.143 { 00:23:26.143 "cntlid": 49, 00:23:26.143 "qid": 0, 00:23:26.143 "state": "enabled", 00:23:26.143 "listen_address": { 00:23:26.143 "trtype": "TCP", 00:23:26.143 "adrfam": "IPv4", 00:23:26.143 "traddr": "10.0.0.2", 00:23:26.143 "trsvcid": "4420" 00:23:26.143 }, 00:23:26.143 "peer_address": { 00:23:26.143 "trtype": "TCP", 00:23:26.143 "adrfam": "IPv4", 00:23:26.143 "traddr": "10.0.0.1", 00:23:26.144 "trsvcid": "56582" 00:23:26.144 }, 00:23:26.144 "auth": { 00:23:26.144 "state": "completed", 00:23:26.144 "digest": "sha384", 00:23:26.144 "dhgroup": "null" 00:23:26.144 } 00:23:26.144 } 00:23:26.144 ]' 00:23:26.144 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.144 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:26.144 13:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.144 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:26.144 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.401 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.401 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.401 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.401 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:27.338 13:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:27.338 13:51:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.338 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.597 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:27.856 { 00:23:27.856 "cntlid": 51, 00:23:27.856 "qid": 0, 00:23:27.856 "state": "enabled", 00:23:27.856 "listen_address": { 00:23:27.856 "trtype": "TCP", 00:23:27.856 "adrfam": "IPv4", 00:23:27.856 "traddr": "10.0.0.2", 00:23:27.856 "trsvcid": "4420" 00:23:27.856 }, 00:23:27.856 "peer_address": { 00:23:27.856 "trtype": "TCP", 00:23:27.856 "adrfam": "IPv4", 00:23:27.856 "traddr": "10.0.0.1", 00:23:27.856 "trsvcid": "56596" 00:23:27.856 }, 00:23:27.856 "auth": { 00:23:27.856 "state": "completed", 00:23:27.856 "digest": "sha384", 00:23:27.856 "dhgroup": "null" 00:23:27.856 } 00:23:27.856 } 00:23:27.856 ]' 00:23:27.856 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.114 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:28.114 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.115 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:28.115 13:51:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.115 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.115 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.115 13:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.373 13:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:28.940 13:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.940 13:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:28.940 13:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.940 13:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.199 13:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.199 13:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.199 13:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:29.199 13:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:29.199 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.457 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:29.715 { 00:23:29.715 "cntlid": 53, 00:23:29.715 "qid": 0, 00:23:29.715 "state": "enabled", 00:23:29.715 "listen_address": { 00:23:29.715 "trtype": "TCP", 00:23:29.715 "adrfam": "IPv4", 00:23:29.715 "traddr": "10.0.0.2", 00:23:29.715 "trsvcid": "4420" 00:23:29.715 }, 00:23:29.715 "peer_address": { 00:23:29.715 "trtype": "TCP", 00:23:29.715 "adrfam": "IPv4", 00:23:29.715 "traddr": "10.0.0.1", 00:23:29.715 "trsvcid": "56628" 00:23:29.715 }, 00:23:29.715 "auth": { 00:23:29.715 "state": "completed", 00:23:29.715 "digest": "sha384", 00:23:29.715 "dhgroup": "null" 00:23:29.715 } 00:23:29.715 } 00:23:29.715 ]' 00:23:29.715 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.973 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.232 13:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:30.800 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:31.060 13:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:31.320 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.579 13:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:31.838 { 00:23:31.838 "cntlid": 55, 00:23:31.838 "qid": 0, 00:23:31.838 "state": "enabled", 00:23:31.838 "listen_address": { 00:23:31.838 "trtype": "TCP", 00:23:31.838 "adrfam": "IPv4", 00:23:31.838 "traddr": "10.0.0.2", 00:23:31.838 "trsvcid": "4420" 00:23:31.838 }, 00:23:31.838 "peer_address": { 00:23:31.838 "trtype": "TCP", 00:23:31.838 "adrfam": "IPv4", 00:23:31.838 "traddr": "10.0.0.1", 00:23:31.838 "trsvcid": "42422" 00:23:31.838 }, 00:23:31.838 "auth": { 00:23:31.838 "state": "completed", 00:23:31.838 "digest": "sha384", 00:23:31.838 "dhgroup": "null" 00:23:31.838 } 00:23:31.838 } 00:23:31.838 ]' 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.838 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.096 13:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:32.664 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:32.922 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:23:33.181 13:51:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.181 13:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.181 00:23:33.181 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:33.181 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:33.181 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:33.441 { 00:23:33.441 "cntlid": 57, 00:23:33.441 "qid": 0, 00:23:33.441 "state": "enabled", 00:23:33.441 "listen_address": { 00:23:33.441 "trtype": "TCP", 00:23:33.441 "adrfam": "IPv4", 00:23:33.441 "traddr": "10.0.0.2", 00:23:33.441 "trsvcid": "4420" 00:23:33.441 }, 00:23:33.441 "peer_address": { 00:23:33.441 "trtype": "TCP", 00:23:33.441 "adrfam": "IPv4", 00:23:33.441 "traddr": "10.0.0.1", 00:23:33.441 "trsvcid": "42444" 00:23:33.441 }, 00:23:33.441 "auth": { 00:23:33.441 "state": "completed", 00:23:33.441 "digest": "sha384", 00:23:33.441 "dhgroup": "ffdhe2048" 00:23:33.441 } 00:23:33.441 } 00:23:33.441 ]' 00:23:33.441 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:33.701 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:33.701 13:51:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:33.701 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:33.701 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:33.701 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.701 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.701 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.959 13:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.527 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.787 13:51:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.787 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.046 00:23:35.046 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:35.046 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:35.046 13:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:35.306 { 00:23:35.306 "cntlid": 59, 00:23:35.306 "qid": 0, 00:23:35.306 "state": "enabled", 00:23:35.306 "listen_address": { 00:23:35.306 "trtype": "TCP", 00:23:35.306 "adrfam": "IPv4", 00:23:35.306 "traddr": "10.0.0.2", 00:23:35.306 "trsvcid": "4420" 00:23:35.306 }, 00:23:35.306 "peer_address": { 00:23:35.306 "trtype": "TCP", 00:23:35.306 "adrfam": "IPv4", 00:23:35.306 "traddr": "10.0.0.1", 00:23:35.306 "trsvcid": "42474" 00:23:35.306 }, 00:23:35.306 "auth": { 00:23:35.306 "state": "completed", 00:23:35.306 "digest": "sha384", 00:23:35.306 "dhgroup": "ffdhe2048" 00:23:35.306 } 00:23:35.306 } 00:23:35.306 ]' 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:35.306 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:35.566 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:35.566 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:35.566 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.566 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.566 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.825 13:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:36.394 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.653 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.912 00:23:36.912 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:36.912 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:36.912 13:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:37.170 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.170 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.170 13:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.170 13:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.170 13:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.170 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:37.171 { 00:23:37.171 "cntlid": 61, 00:23:37.171 "qid": 0, 00:23:37.171 "state": "enabled", 00:23:37.171 "listen_address": { 00:23:37.171 "trtype": "TCP", 00:23:37.171 "adrfam": "IPv4", 00:23:37.171 "traddr": "10.0.0.2", 00:23:37.171 "trsvcid": "4420" 00:23:37.171 }, 00:23:37.171 "peer_address": { 00:23:37.171 "trtype": "TCP", 00:23:37.171 "adrfam": "IPv4", 00:23:37.171 "traddr": "10.0.0.1", 00:23:37.171 "trsvcid": "42496" 00:23:37.171 }, 00:23:37.171 "auth": { 00:23:37.171 "state": "completed", 00:23:37.171 "digest": "sha384", 00:23:37.171 "dhgroup": "ffdhe2048" 00:23:37.171 } 00:23:37.171 } 00:23:37.171 ]' 00:23:37.171 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.430 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.689 13:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:38.258 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.258 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:38.258 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.258 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:38.518 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:39.087 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:39.087 { 00:23:39.087 "cntlid": 63, 00:23:39.087 "qid": 0, 00:23:39.087 "state": "enabled", 00:23:39.087 "listen_address": { 00:23:39.087 "trtype": "TCP", 00:23:39.087 "adrfam": "IPv4", 00:23:39.087 "traddr": "10.0.0.2", 00:23:39.087 "trsvcid": "4420" 00:23:39.087 }, 00:23:39.087 "peer_address": { 00:23:39.087 "trtype": "TCP", 00:23:39.087 "adrfam": "IPv4", 00:23:39.087 "traddr": "10.0.0.1", 00:23:39.087 "trsvcid": "42518" 00:23:39.087 }, 00:23:39.087 "auth": { 00:23:39.087 "state": "completed", 00:23:39.087 "digest": 
"sha384", 00:23:39.087 "dhgroup": "ffdhe2048" 00:23:39.087 } 00:23:39.087 } 00:23:39.087 ]' 00:23:39.087 13:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.353 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.612 13:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:40.179 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.438 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.698 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.958 13:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.218 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:41.218 { 00:23:41.218 "cntlid": 65, 00:23:41.218 "qid": 0, 00:23:41.218 "state": "enabled", 00:23:41.218 "listen_address": { 00:23:41.218 "trtype": "TCP", 00:23:41.218 "adrfam": "IPv4", 00:23:41.218 "traddr": "10.0.0.2", 00:23:41.218 "trsvcid": "4420" 00:23:41.218 }, 00:23:41.218 "peer_address": { 00:23:41.218 "trtype": "TCP", 00:23:41.218 "adrfam": "IPv4", 00:23:41.218 "traddr": "10.0.0.1", 00:23:41.218 "trsvcid": "37374" 00:23:41.218 }, 00:23:41.218 "auth": { 00:23:41.218 "state": "completed", 00:23:41.218 "digest": "sha384", 00:23:41.218 "dhgroup": "ffdhe3072" 00:23:41.218 } 00:23:41.218 } 00:23:41.218 ]' 00:23:41.218 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:41.218 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.218 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:41.218 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:41.218 13:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:41.218 13:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.218 13:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.218 13:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.547 
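The stretch of trace ending here is one complete RPC-side pass of connect_authenticate, in this case for sha384 / ffdhe3072 / key0. Condensed into plain shell, as a sketch reconstructed from the xtrace output rather than copied from auth.sh (hostrpc and rpc_cmd stand in for the host-side and target-side rpc.py wrappers the harness provides):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme app
  rpc_cmd() { "$RPC" "$@"; }                         # target side; assumed stand-in for the autotest_common.sh helper

  # 1. Pin the host to one digest/dhgroup pair and authorize the key on the target.
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 2. Attach a controller so DH-HMAC-CHAP actually runs, and confirm it came up.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # 3. Read the qpair's auth descriptor back from the target and assert each field.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  hostrpc bdev_nvme_detach_controller nvme0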
13:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:42.115 13:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.115 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:42.115 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.115 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.115 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.115 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:42.116 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:42.116 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.375 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.635 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.894 { 00:23:42.894 "cntlid": 67, 00:23:42.894 "qid": 0, 00:23:42.894 "state": "enabled", 00:23:42.894 "listen_address": { 00:23:42.894 "trtype": "TCP", 00:23:42.894 "adrfam": "IPv4", 00:23:42.894 "traddr": "10.0.0.2", 00:23:42.894 "trsvcid": "4420" 00:23:42.894 }, 00:23:42.894 "peer_address": { 00:23:42.894 "trtype": "TCP", 00:23:42.894 "adrfam": "IPv4", 00:23:42.894 "traddr": "10.0.0.1", 00:23:42.894 "trsvcid": "37400" 00:23:42.894 }, 00:23:42.894 "auth": { 00:23:42.894 "state": "completed", 00:23:42.894 "digest": "sha384", 00:23:42.894 "dhgroup": "ffdhe3072" 00:23:42.894 } 00:23:42.894 } 00:23:42.894 ]' 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:42.894 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:43.153 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:43.153 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:43.153 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.153 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.153 13:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.413 13:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.982 
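Between RPC passes each key is also exercised through the kernel initiator, as in the nvme connect / nvme disconnect pair just above. The secrets travel in the NVMe DH-HMAC-CHAP representation DHHC-1:NN:<base64>:, where NN records the hash the secret was transformed with (00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the host and controller secrets in this log carry different prefixes. A sketch of that leg with the key material elided (the real strings appear verbatim in the trace):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Bidirectional auth: --dhchap-secret is the host key, --dhchap-ctrl-secret the controller key.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
       -q "$HOSTNQN" --hostid 006f0d1b-21c0-e711-906e-00163566263e \
       --dhchap-secret      'DHHC-1:01:<host key, base64>:' \
       --dhchap-ctrl-secret 'DHHC-1:02:<controller key, base64>:'
  nvme disconnect -n "$SUBNQN"     # expected output: "disconnected 1 controller(s)"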
13:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:43.982 13:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.241 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.500 00:23:44.500 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:44.500 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:44.500 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:44.760 { 00:23:44.760 "cntlid": 69, 00:23:44.760 "qid": 0, 00:23:44.760 "state": "enabled", 00:23:44.760 "listen_address": { 
00:23:44.760 "trtype": "TCP", 00:23:44.760 "adrfam": "IPv4", 00:23:44.760 "traddr": "10.0.0.2", 00:23:44.760 "trsvcid": "4420" 00:23:44.760 }, 00:23:44.760 "peer_address": { 00:23:44.760 "trtype": "TCP", 00:23:44.760 "adrfam": "IPv4", 00:23:44.760 "traddr": "10.0.0.1", 00:23:44.760 "trsvcid": "37440" 00:23:44.760 }, 00:23:44.760 "auth": { 00:23:44.760 "state": "completed", 00:23:44.760 "digest": "sha384", 00:23:44.760 "dhgroup": "ffdhe3072" 00:23:44.760 } 00:23:44.760 } 00:23:44.760 ]' 00:23:44.760 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.020 13:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.279 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.848 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:46.108 
13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:46.108 13:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:46.367 00:23:46.367 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:46.367 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:46.367 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:46.626 { 00:23:46.626 "cntlid": 71, 00:23:46.626 "qid": 0, 00:23:46.626 "state": "enabled", 00:23:46.626 "listen_address": { 00:23:46.626 "trtype": "TCP", 00:23:46.626 "adrfam": "IPv4", 00:23:46.626 "traddr": "10.0.0.2", 00:23:46.626 "trsvcid": "4420" 00:23:46.626 }, 00:23:46.626 "peer_address": { 00:23:46.626 "trtype": "TCP", 00:23:46.626 "adrfam": "IPv4", 00:23:46.626 "traddr": "10.0.0.1", 00:23:46.626 "trsvcid": "37466" 00:23:46.626 }, 00:23:46.626 "auth": { 00:23:46.626 "state": "completed", 00:23:46.626 "digest": "sha384", 00:23:46.626 "dhgroup": "ffdhe3072" 00:23:46.626 } 00:23:46.626 } 00:23:46.626 ]' 00:23:46.626 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.885 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.145 13:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:47.715 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.974 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.975 13:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.234 00:23:48.234 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:48.234 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:48.234 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.493 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.493 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.493 13:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.493 13:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.493 13:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.493 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:48.493 { 00:23:48.493 "cntlid": 73, 00:23:48.493 "qid": 0, 00:23:48.493 "state": "enabled", 00:23:48.493 "listen_address": { 00:23:48.493 "trtype": "TCP", 00:23:48.494 "adrfam": "IPv4", 00:23:48.494 "traddr": "10.0.0.2", 00:23:48.494 "trsvcid": "4420" 00:23:48.494 }, 00:23:48.494 "peer_address": { 00:23:48.494 "trtype": "TCP", 00:23:48.494 "adrfam": "IPv4", 00:23:48.494 "traddr": "10.0.0.1", 00:23:48.494 "trsvcid": "37484" 00:23:48.494 }, 00:23:48.494 "auth": { 00:23:48.494 "state": "completed", 00:23:48.494 "digest": "sha384", 00:23:48.494 "dhgroup": "ffdhe4096" 00:23:48.494 } 00:23:48.494 } 00:23:48.494 ]' 00:23:48.494 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:48.494 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:48.494 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.753 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:48.753 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.753 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.753 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.753 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.013 13:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:49.581 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.841 13:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.100 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:50.360 { 00:23:50.360 "cntlid": 75, 00:23:50.360 "qid": 0, 00:23:50.360 "state": "enabled", 00:23:50.360 "listen_address": { 00:23:50.360 "trtype": "TCP", 00:23:50.360 "adrfam": "IPv4", 00:23:50.360 "traddr": "10.0.0.2", 00:23:50.360 "trsvcid": "4420" 00:23:50.360 }, 00:23:50.360 "peer_address": { 00:23:50.360 "trtype": "TCP", 00:23:50.360 "adrfam": "IPv4", 00:23:50.360 "traddr": "10.0.0.1", 00:23:50.360 "trsvcid": "37508" 00:23:50.360 }, 00:23:50.360 "auth": { 00:23:50.360 "state": "completed", 00:23:50.360 "digest": "sha384", 00:23:50.360 "dhgroup": "ffdhe4096" 00:23:50.360 } 00:23:50.360 } 00:23:50.360 ]' 00:23:50.360 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.618 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.877 13:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:51.445 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:51.704 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.705 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.965 00:23:52.225 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:52.225 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.225 13:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.225 { 00:23:52.225 "cntlid": 77, 00:23:52.225 "qid": 0, 00:23:52.225 "state": "enabled", 00:23:52.225 "listen_address": { 00:23:52.225 "trtype": "TCP", 00:23:52.225 "adrfam": "IPv4", 00:23:52.225 "traddr": "10.0.0.2", 00:23:52.225 "trsvcid": "4420" 00:23:52.225 }, 00:23:52.225 "peer_address": { 00:23:52.225 "trtype": "TCP", 00:23:52.225 "adrfam": "IPv4", 00:23:52.225 "traddr": "10.0.0.1", 00:23:52.225 "trsvcid": "54844" 00:23:52.225 }, 00:23:52.225 "auth": { 00:23:52.225 "state": "completed", 00:23:52.225 "digest": "sha384", 00:23:52.225 "dhgroup": "ffdhe4096" 00:23:52.225 } 00:23:52.225 } 00:23:52.225 ]' 00:23:52.225 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.485 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.744 13:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:53.312 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:53.572 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:53.832 00:23:53.832 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:53.832 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.832 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:54.091 { 00:23:54.091 "cntlid": 79, 00:23:54.091 "qid": 0, 00:23:54.091 "state": "enabled", 00:23:54.091 "listen_address": { 00:23:54.091 "trtype": "TCP", 00:23:54.091 "adrfam": "IPv4", 00:23:54.091 "traddr": "10.0.0.2", 00:23:54.091 "trsvcid": "4420" 00:23:54.091 }, 00:23:54.091 "peer_address": { 00:23:54.091 "trtype": "TCP", 00:23:54.091 "adrfam": "IPv4", 00:23:54.091 "traddr": "10.0.0.1", 00:23:54.091 "trsvcid": "54878" 00:23:54.091 }, 00:23:54.091 "auth": { 00:23:54.091 "state": "completed", 00:23:54.091 "digest": "sha384", 00:23:54.091 "dhgroup": "ffdhe4096" 00:23:54.091 } 00:23:54.091 } 00:23:54.091 ]' 00:23:54.091 13:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.350 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.609 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:23:55.178 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.178 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.178 13:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:55.178 13:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.178 13:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.178 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.178 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.178 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:55.178 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:55.178 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.437 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.006 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.006 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.266 13:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.266 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:56.266 { 00:23:56.266 "cntlid": 81, 00:23:56.266 "qid": 0, 00:23:56.266 "state": "enabled", 00:23:56.266 "listen_address": { 00:23:56.266 "trtype": "TCP", 00:23:56.266 "adrfam": "IPv4", 00:23:56.266 "traddr": "10.0.0.2", 00:23:56.266 "trsvcid": "4420" 00:23:56.266 }, 00:23:56.266 "peer_address": { 00:23:56.266 "trtype": "TCP", 00:23:56.266 "adrfam": "IPv4", 00:23:56.266 "traddr": "10.0.0.1", 00:23:56.266 "trsvcid": "54908" 00:23:56.266 }, 00:23:56.266 "auth": { 00:23:56.266 "state": "completed", 00:23:56.266 "digest": "sha384", 00:23:56.266 "dhgroup": "ffdhe6144" 00:23:56.266 } 00:23:56.266 } 00:23:56.266 ]' 00:23:56.266 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:56.266 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:56.266 13:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:56.266 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:56.266 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:56.266 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.266 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.266 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.526 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:23:57.094 13:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:57.353 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.612 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.872 00:23:57.872 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.872 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.872 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:58.131 { 00:23:58.131 "cntlid": 83, 00:23:58.131 "qid": 0, 00:23:58.131 "state": "enabled", 00:23:58.131 "listen_address": { 00:23:58.131 "trtype": "TCP", 00:23:58.131 "adrfam": "IPv4", 00:23:58.131 "traddr": "10.0.0.2", 00:23:58.131 "trsvcid": "4420" 00:23:58.131 }, 00:23:58.131 "peer_address": { 00:23:58.131 "trtype": "TCP", 00:23:58.131 "adrfam": "IPv4", 00:23:58.131 "traddr": "10.0.0.1", 00:23:58.131 "trsvcid": "54946" 00:23:58.131 }, 00:23:58.131 "auth": { 00:23:58.131 "state": "completed", 00:23:58.131 "digest": "sha384", 00:23:58.131 
"dhgroup": "ffdhe6144" 00:23:58.131 } 00:23:58.131 } 00:23:58.131 ]' 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:58.131 13:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:58.131 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:58.434 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:58.434 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:58.434 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.434 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.434 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.434 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:59.372 13:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:59.372 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:23:59.372 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:59.372 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:59.372 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.373 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.941 00:23:59.941 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:59.941 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:59.941 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.941 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:00.200 { 00:24:00.200 "cntlid": 85, 00:24:00.200 "qid": 0, 00:24:00.200 "state": "enabled", 00:24:00.200 "listen_address": { 00:24:00.200 "trtype": "TCP", 00:24:00.200 "adrfam": "IPv4", 00:24:00.200 "traddr": "10.0.0.2", 00:24:00.200 "trsvcid": "4420" 00:24:00.200 }, 00:24:00.200 "peer_address": { 00:24:00.200 "trtype": "TCP", 00:24:00.200 "adrfam": "IPv4", 00:24:00.200 "traddr": "10.0.0.1", 00:24:00.200 "trsvcid": "54972" 00:24:00.200 }, 00:24:00.200 "auth": { 00:24:00.200 "state": "completed", 00:24:00.200 "digest": "sha384", 00:24:00.200 "dhgroup": "ffdhe6144" 00:24:00.200 } 00:24:00.200 } 00:24:00.200 ]' 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:00.200 13:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:00.200 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.200 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.200 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.459 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
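
The verification step in each pass parses nvmf_subsystem_get_qpairs output like the JSON arrays printed above. A sketch of the three assertions the trace shows at target/auth.sh@46-48, assuming the qpair list was captured in $qpairs; the exact plumbing inside auth.sh may differ:

# Negotiated parameters must match what this pass configured.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]     # hash actually negotiated
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]  # DH group actually negotiated
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # handshake finished, qpair usable
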
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:01.029 13:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:01.288 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:01.857 00:24:01.857 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:01.857 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:01.857 13:51:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:02.116 { 00:24:02.116 "cntlid": 87, 00:24:02.116 "qid": 0, 00:24:02.116 "state": "enabled", 00:24:02.116 "listen_address": { 00:24:02.116 "trtype": "TCP", 00:24:02.116 "adrfam": "IPv4", 00:24:02.116 "traddr": "10.0.0.2", 00:24:02.116 "trsvcid": "4420" 00:24:02.116 }, 00:24:02.116 "peer_address": { 00:24:02.116 "trtype": "TCP", 00:24:02.116 "adrfam": "IPv4", 00:24:02.116 "traddr": "10.0.0.1", 00:24:02.116 "trsvcid": "51492" 00:24:02.116 }, 00:24:02.116 "auth": { 00:24:02.116 "state": "completed", 00:24:02.116 "digest": "sha384", 00:24:02.116 "dhgroup": "ffdhe6144" 00:24:02.116 } 00:24:02.116 } 00:24:02.116 ]' 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:02.116 13:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.375 13:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:03.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.311 13:51:55 
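
Note what distinguished the key3 pass that just finished: nvmf_subsystem_add_host and nvme connect were issued without a controller key (no --dhchap-ctrlr-key / --dhchap-ctrl-secret), so only the host proved its identity, whereas the key0-key2 passes were bidirectional. The secrets on the nvme connect lines use the DH-HMAC-CHAP secret representation from NVMe TP 8006; a sketch of the shape, with a placeholder instead of real key material:

# DHHC-1:<t>:<base64 key material plus CRC-32>:
# The <t> field marks how the secret was transformed. In this run key0..key3 carry
# 00..03, which by the nvme-cli convention reads as: 00 = unhashed, 01 = SHA-256,
# 02 = SHA-384, 03 = SHA-512 (a hedged reading, inferred from the trace).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 006f0d1b-21c0-e711-906e-00163566263e \
    --dhchap-secret 'DHHC-1:03:<base64>:'   # host key only, so unidirectional auth
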
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:03.311 13:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.311 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.879 00:24:03.879 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:03.879 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:03.879 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:04.138 { 00:24:04.138 "cntlid": 89, 00:24:04.138 "qid": 0, 00:24:04.138 "state": "enabled", 00:24:04.138 "listen_address": { 00:24:04.138 "trtype": "TCP", 00:24:04.138 "adrfam": "IPv4", 00:24:04.138 "traddr": "10.0.0.2", 00:24:04.138 
"trsvcid": "4420" 00:24:04.138 }, 00:24:04.138 "peer_address": { 00:24:04.138 "trtype": "TCP", 00:24:04.138 "adrfam": "IPv4", 00:24:04.138 "traddr": "10.0.0.1", 00:24:04.138 "trsvcid": "51518" 00:24:04.138 }, 00:24:04.138 "auth": { 00:24:04.138 "state": "completed", 00:24:04.138 "digest": "sha384", 00:24:04.138 "dhgroup": "ffdhe8192" 00:24:04.138 } 00:24:04.138 } 00:24:04.138 ]' 00:24:04.138 13:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:04.138 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:04.138 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:04.398 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:04.398 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:04.398 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.398 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.398 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.657 13:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.226 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.484 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.485 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:06.052 00:24:06.052 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:06.052 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.052 13:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:06.311 { 00:24:06.311 "cntlid": 91, 00:24:06.311 "qid": 0, 00:24:06.311 "state": "enabled", 00:24:06.311 "listen_address": { 00:24:06.311 "trtype": "TCP", 00:24:06.311 "adrfam": "IPv4", 00:24:06.311 "traddr": "10.0.0.2", 00:24:06.311 "trsvcid": "4420" 00:24:06.311 }, 00:24:06.311 "peer_address": { 00:24:06.311 "trtype": "TCP", 00:24:06.311 "adrfam": "IPv4", 00:24:06.311 "traddr": "10.0.0.1", 00:24:06.311 "trsvcid": "51548" 00:24:06.311 }, 00:24:06.311 "auth": { 00:24:06.311 "state": "completed", 00:24:06.311 "digest": "sha384", 00:24:06.311 "dhgroup": "ffdhe8192" 00:24:06.311 } 00:24:06.311 } 00:24:06.311 ]' 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:06.311 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.570 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:06.570 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.570 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.570 13:51:59 
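
The ckey=(...) assignment traced at target/auth.sh@37, repeated just above, is what makes the key3 passes unidirectional: bash's ${var:+word} expansion produces the --dhchap-ctrlr-key argument pair only when a controller key exists at that index, and an empty array otherwise. A standalone sketch of the idiom:

# ${arr[i]:+word} expands to word when arr[i] is set and non-empty, to nothing otherwise.
ckeys=(ckey0 ckey1 ckey2 "")   # index 3 deliberately empty, as in this run
for i in 0 1 2 3; do
    ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i -> ${ckey[@]:-<host-auth only>}"
done
# key0 -> --dhchap-ctrlr-key ckey0
# key1 -> --dhchap-ctrlr-key ckey1
# key2 -> --dhchap-ctrlr-key ckey2
# key3 -> <host-auth only>
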
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.570 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:06.829 13:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:07.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.397 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.656 13:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.223 00:24:08.223 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:08.224 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:08.224 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.481 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.481 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.481 13:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.481 13:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.481 13:52:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.481 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:08.481 { 00:24:08.481 "cntlid": 93, 00:24:08.481 "qid": 0, 00:24:08.481 "state": "enabled", 00:24:08.481 "listen_address": { 00:24:08.481 "trtype": "TCP", 00:24:08.481 "adrfam": "IPv4", 00:24:08.481 "traddr": "10.0.0.2", 00:24:08.481 "trsvcid": "4420" 00:24:08.482 }, 00:24:08.482 "peer_address": { 00:24:08.482 "trtype": "TCP", 00:24:08.482 "adrfam": "IPv4", 00:24:08.482 "traddr": "10.0.0.1", 00:24:08.482 "trsvcid": "51590" 00:24:08.482 }, 00:24:08.482 "auth": { 00:24:08.482 "state": "completed", 00:24:08.482 "digest": "sha384", 00:24:08.482 "dhgroup": "ffdhe8192" 00:24:08.482 } 00:24:08.482 } 00:24:08.482 ]' 00:24:08.482 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:08.482 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:08.482 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.482 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:08.482 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.741 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.741 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.741 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.000 13:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:09.570 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.829 13:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:10.397 00:24:10.397 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:10.397 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.397 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.656 13:52:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:10.656 { 00:24:10.656 "cntlid": 95, 00:24:10.656 "qid": 0, 00:24:10.656 "state": "enabled", 00:24:10.656 "listen_address": { 00:24:10.656 "trtype": "TCP", 00:24:10.656 "adrfam": "IPv4", 00:24:10.656 "traddr": "10.0.0.2", 00:24:10.656 "trsvcid": "4420" 00:24:10.656 }, 00:24:10.656 "peer_address": { 00:24:10.656 "trtype": "TCP", 00:24:10.656 "adrfam": "IPv4", 00:24:10.656 "traddr": "10.0.0.1", 00:24:10.656 "trsvcid": "51620" 00:24:10.656 }, 00:24:10.656 "auth": { 00:24:10.656 "state": "completed", 00:24:10.656 "digest": "sha384", 00:24:10.656 "dhgroup": "ffdhe8192" 00:24:10.656 } 00:24:10.656 } 00:24:10.656 ]' 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.656 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.915 13:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- 
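
By this point both outer loops have rolled over: the @91/@92 markers just above show the digest loop advancing to sha512 and the dhgroup loop restarting at null, which the following qpair JSON confirms. The whole section is one test matrix; a sketch of the driver loop, where the array contents are an assumption inferred from the order the trace visits them (sha256 and the smaller FFDHE groups ran before this excerpt):

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do        # target/auth.sh@91
  for dhgroup in "${dhgroups[@]}"; do    # target/auth.sh@92
    for keyid in "${!keys[@]}"; do       # target/auth.sh@93
      # @94: constrain the host, then @96: one authenticated attach/verify/detach pass
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done
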
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.854 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.113 00:24:12.113 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:12.113 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:12.113 13:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:12.373 { 00:24:12.373 "cntlid": 97, 00:24:12.373 "qid": 0, 00:24:12.373 "state": "enabled", 00:24:12.373 "listen_address": { 00:24:12.373 "trtype": "TCP", 00:24:12.373 "adrfam": "IPv4", 00:24:12.373 "traddr": "10.0.0.2", 00:24:12.373 "trsvcid": "4420" 00:24:12.373 }, 00:24:12.373 "peer_address": { 00:24:12.373 "trtype": "TCP", 00:24:12.373 "adrfam": "IPv4", 00:24:12.373 "traddr": "10.0.0.1", 00:24:12.373 "trsvcid": "40186" 00:24:12.373 }, 00:24:12.373 "auth": { 00:24:12.373 "state": "completed", 00:24:12.373 "digest": "sha512", 00:24:12.373 "dhgroup": "null" 00:24:12.373 } 00:24:12.373 } 00:24:12.373 ]' 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:12.373 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:12.633 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.633 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.633 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.892 13:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:13.460 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.719 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.979 00:24:13.979 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:13.979 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:13.979 13:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:14.238 { 00:24:14.238 "cntlid": 99, 00:24:14.238 "qid": 0, 00:24:14.238 "state": "enabled", 00:24:14.238 "listen_address": { 00:24:14.238 "trtype": "TCP", 00:24:14.238 "adrfam": "IPv4", 00:24:14.238 "traddr": "10.0.0.2", 00:24:14.238 "trsvcid": "4420" 00:24:14.238 }, 00:24:14.238 "peer_address": { 00:24:14.238 "trtype": "TCP", 00:24:14.238 "adrfam": "IPv4", 00:24:14.238 "traddr": "10.0.0.1", 00:24:14.238 "trsvcid": "40216" 00:24:14.238 }, 00:24:14.238 "auth": { 00:24:14.238 "state": "completed", 00:24:14.238 "digest": "sha512", 00:24:14.238 "dhgroup": "null" 00:24:14.238 } 00:24:14.238 } 00:24:14.238 ]' 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:14.238 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:14.497 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.497 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.497 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.497 13:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 
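
Everything above is repeated invocations of connect_authenticate, traced at target/auth.sh@34-49. A hedged reconstruction from the xtrace output; the @NN comments map to the markers in the log, and the variable plumbing is approximate:

connect_authenticate() {                                           # @34
    local digest dhgroup key ckey qpairs
    digest=$1 dhgroup=$2 key=key$3                                 # @36
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})               # @37
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "$key" "${ckey[@]}"                           # @39
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "$key" "${ckey[@]}"                           # @40
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @44
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)    # @45
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]             # @46
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]            # @47
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]           # @48
    hostrpc bdev_nvme_detach_controller nvme0                                 # @49
}

After the function returns, the @52-@56 records repeat the handshake once more through the kernel path (nvme connect, nvme disconnect) and remove the host from the subsystem. The xtrace_disable and [[ 0 == 0 ]] pairs wrapping each rpc_cmd appear to be autotest_common.sh muting trace output inside the RPC helper and asserting its exit status.
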
00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:15.435 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.760 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.019 00:24:16.019 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:16.019 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:16.019 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:16.019 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:16.279 { 00:24:16.279 "cntlid": 101, 00:24:16.279 "qid": 0, 00:24:16.279 "state": "enabled", 00:24:16.279 "listen_address": { 00:24:16.279 "trtype": "TCP", 00:24:16.279 "adrfam": "IPv4", 00:24:16.279 "traddr": "10.0.0.2", 00:24:16.279 "trsvcid": "4420" 00:24:16.279 }, 00:24:16.279 "peer_address": { 00:24:16.279 "trtype": "TCP", 00:24:16.279 "adrfam": "IPv4", 00:24:16.279 "traddr": "10.0.0.1", 00:24:16.279 "trsvcid": "40230" 00:24:16.279 }, 00:24:16.279 "auth": { 00:24:16.279 "state": "completed", 00:24:16.279 "digest": "sha512", 00:24:16.279 "dhgroup": "null" 00:24:16.279 } 00:24:16.279 } 00:24:16.279 ]' 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:16.279 13:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:16.279 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:16.279 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:16.279 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.279 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.279 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.538 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:17.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:17.105 13:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.365 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.625 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:17.884 { 00:24:17.884 "cntlid": 103, 00:24:17.884 "qid": 0, 00:24:17.884 "state": "enabled", 00:24:17.884 "listen_address": { 00:24:17.884 "trtype": "TCP", 00:24:17.884 "adrfam": "IPv4", 00:24:17.884 "traddr": "10.0.0.2", 00:24:17.884 "trsvcid": "4420" 00:24:17.884 }, 00:24:17.884 "peer_address": { 00:24:17.884 "trtype": "TCP", 00:24:17.884 "adrfam": "IPv4", 00:24:17.884 "traddr": "10.0.0.1", 00:24:17.884 "trsvcid": "40268" 00:24:17.884 }, 00:24:17.884 "auth": { 00:24:17.884 "state": "completed", 00:24:17.884 "digest": "sha512", 00:24:17.884 "dhgroup": "null" 00:24:17.884 } 00:24:17.884 } 00:24:17.884 ]' 00:24:17.884 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:18.144 13:52:10 
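
One reading aid for the comparisons that dominate this log, such as [[ sha512 == \s\h\a\5\1\2 ]] just below: the backslashes are bash xtrace rendering, not corruption. Inside [[ ]] an unquoted right operand of == is a glob pattern, so when the script quotes it for a literal match, set -x re-prints each character escaped to show that no globbing will happen. A minimal reproduction with the default PS4 prompt:

$ set -x
$ digest=sha512
$ [[ $digest == "sha512" ]] && echo match
+ [[ sha512 == \s\h\a\5\1\2 ]]
+ echo match
match
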
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:18.144 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:18.144 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:24:18.144 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:18.144 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:18.144 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:18.144 13:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.403 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:18.972 13:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.231 13:52:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.231 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.490 00:24:19.490 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:19.490 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:19.490 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:19.748 { 00:24:19.748 "cntlid": 105, 00:24:19.748 "qid": 0, 00:24:19.748 "state": "enabled", 00:24:19.748 "listen_address": { 00:24:19.748 "trtype": "TCP", 00:24:19.748 "adrfam": "IPv4", 00:24:19.748 "traddr": "10.0.0.2", 00:24:19.748 "trsvcid": "4420" 00:24:19.748 }, 00:24:19.748 "peer_address": { 00:24:19.748 "trtype": "TCP", 00:24:19.748 "adrfam": "IPv4", 00:24:19.748 "traddr": "10.0.0.1", 00:24:19.748 "trsvcid": "40292" 00:24:19.748 }, 00:24:19.748 "auth": { 00:24:19.748 "state": "completed", 00:24:19.748 "digest": "sha512", 00:24:19.748 "dhgroup": "ffdhe2048" 00:24:19.748 } 00:24:19.748 } 00:24:19.748 ]' 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:19.748 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:20.007 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:20.007 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:20.007 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.007 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.007 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.265 13:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 
006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:20.831 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.090 13:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.349 00:24:21.349 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:21.349 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:24:21.349 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:21.607 { 00:24:21.607 "cntlid": 107, 00:24:21.607 "qid": 0, 00:24:21.607 "state": "enabled", 00:24:21.607 "listen_address": { 00:24:21.607 "trtype": "TCP", 00:24:21.607 "adrfam": "IPv4", 00:24:21.607 "traddr": "10.0.0.2", 00:24:21.607 "trsvcid": "4420" 00:24:21.607 }, 00:24:21.607 "peer_address": { 00:24:21.607 "trtype": "TCP", 00:24:21.607 "adrfam": "IPv4", 00:24:21.607 "traddr": "10.0.0.1", 00:24:21.607 "trsvcid": "42120" 00:24:21.607 }, 00:24:21.607 "auth": { 00:24:21.607 "state": "completed", 00:24:21.607 "digest": "sha512", 00:24:21.607 "dhgroup": "ffdhe2048" 00:24:21.607 } 00:24:21.607 } 00:24:21.607 ]' 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:21.607 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:21.865 13:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:22.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:22.798 13:52:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.798 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.056 13:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.056 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.056 13:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.315 00:24:23.315 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:23.315 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:23.315 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:23.608 { 00:24:23.608 "cntlid": 109, 00:24:23.608 "qid": 0, 00:24:23.608 "state": "enabled", 00:24:23.608 "listen_address": { 00:24:23.608 "trtype": "TCP", 00:24:23.608 "adrfam": "IPv4", 00:24:23.608 "traddr": "10.0.0.2", 00:24:23.608 "trsvcid": "4420" 00:24:23.608 }, 00:24:23.608 "peer_address": { 00:24:23.608 "trtype": "TCP", 00:24:23.608 
"adrfam": "IPv4", 00:24:23.608 "traddr": "10.0.0.1", 00:24:23.608 "trsvcid": "42136" 00:24:23.608 }, 00:24:23.608 "auth": { 00:24:23.608 "state": "completed", 00:24:23.608 "digest": "sha512", 00:24:23.608 "dhgroup": "ffdhe2048" 00:24:23.608 } 00:24:23.608 } 00:24:23.608 ]' 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:23.608 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:23.866 13:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:24.432 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.432 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:24.432 13:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.433 13:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.433 13:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.433 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:24.433 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.433 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.692 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:24.950 00:24:24.950 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:24.950 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:24.950 13:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:25.210 { 00:24:25.210 "cntlid": 111, 00:24:25.210 "qid": 0, 00:24:25.210 "state": "enabled", 00:24:25.210 "listen_address": { 00:24:25.210 "trtype": "TCP", 00:24:25.210 "adrfam": "IPv4", 00:24:25.210 "traddr": "10.0.0.2", 00:24:25.210 "trsvcid": "4420" 00:24:25.210 }, 00:24:25.210 "peer_address": { 00:24:25.210 "trtype": "TCP", 00:24:25.210 "adrfam": "IPv4", 00:24:25.210 "traddr": "10.0.0.1", 00:24:25.210 "trsvcid": "42156" 00:24:25.210 }, 00:24:25.210 "auth": { 00:24:25.210 "state": "completed", 00:24:25.210 "digest": "sha512", 00:24:25.210 "dhgroup": "ffdhe2048" 00:24:25.210 } 00:24:25.210 } 00:24:25.210 ]' 00:24:25.210 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.469 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.728 13:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.296 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.556 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:24:27.125 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:27.125 { 00:24:27.125 "cntlid": 113, 00:24:27.125 "qid": 0, 00:24:27.125 "state": "enabled", 00:24:27.125 "listen_address": { 00:24:27.125 "trtype": "TCP", 00:24:27.125 "adrfam": "IPv4", 00:24:27.125 "traddr": "10.0.0.2", 00:24:27.125 "trsvcid": "4420" 00:24:27.125 }, 00:24:27.125 "peer_address": { 00:24:27.125 "trtype": "TCP", 00:24:27.125 "adrfam": "IPv4", 00:24:27.125 "traddr": "10.0.0.1", 00:24:27.125 "trsvcid": "42198" 00:24:27.125 }, 00:24:27.125 "auth": { 00:24:27.125 "state": "completed", 00:24:27.125 "digest": "sha512", 00:24:27.125 "dhgroup": "ffdhe3072" 00:24:27.125 } 00:24:27.125 } 00:24:27.125 ]' 00:24:27.125 13:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.384 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.641 13:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:28.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:28.206 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.464 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.721 00:24:28.721 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:28.721 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:28.721 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.979 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:28.980 { 00:24:28.980 
"cntlid": 115, 00:24:28.980 "qid": 0, 00:24:28.980 "state": "enabled", 00:24:28.980 "listen_address": { 00:24:28.980 "trtype": "TCP", 00:24:28.980 "adrfam": "IPv4", 00:24:28.980 "traddr": "10.0.0.2", 00:24:28.980 "trsvcid": "4420" 00:24:28.980 }, 00:24:28.980 "peer_address": { 00:24:28.980 "trtype": "TCP", 00:24:28.980 "adrfam": "IPv4", 00:24:28.980 "traddr": "10.0.0.1", 00:24:28.980 "trsvcid": "42212" 00:24:28.980 }, 00:24:28.980 "auth": { 00:24:28.980 "state": "completed", 00:24:28.980 "digest": "sha512", 00:24:28.980 "dhgroup": "ffdhe3072" 00:24:28.980 } 00:24:28.980 } 00:24:28.980 ]' 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:28.980 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:29.237 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:29.237 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:29.237 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.237 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.237 13:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.494 13:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:24:30.060 13:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:30.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.061 13:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.319 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.577 00:24:30.577 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:30.577 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:30.577 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:30.835 { 00:24:30.835 "cntlid": 117, 00:24:30.835 "qid": 0, 00:24:30.835 "state": "enabled", 00:24:30.835 "listen_address": { 00:24:30.835 "trtype": "TCP", 00:24:30.835 "adrfam": "IPv4", 00:24:30.835 "traddr": "10.0.0.2", 00:24:30.835 "trsvcid": "4420" 00:24:30.835 }, 00:24:30.835 "peer_address": { 00:24:30.835 "trtype": "TCP", 00:24:30.835 "adrfam": "IPv4", 00:24:30.835 "traddr": "10.0.0.1", 00:24:30.835 "trsvcid": "42244" 00:24:30.835 }, 00:24:30.835 "auth": { 00:24:30.835 "state": "completed", 00:24:30.835 "digest": "sha512", 00:24:30.835 "dhgroup": "ffdhe3072" 00:24:30.835 } 00:24:30.835 } 00:24:30.835 ]' 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:30.835 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:24:31.094 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:31.094 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:31.094 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:31.094 13:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:32.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:32.035 13:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:32.610 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:32.610 { 00:24:32.610 "cntlid": 119, 00:24:32.610 "qid": 0, 00:24:32.610 "state": "enabled", 00:24:32.610 "listen_address": { 00:24:32.610 "trtype": "TCP", 00:24:32.610 "adrfam": "IPv4", 00:24:32.610 "traddr": "10.0.0.2", 00:24:32.610 "trsvcid": "4420" 00:24:32.610 }, 00:24:32.610 "peer_address": { 00:24:32.610 "trtype": "TCP", 00:24:32.610 "adrfam": "IPv4", 00:24:32.610 "traddr": "10.0.0.1", 00:24:32.610 "trsvcid": "43812" 00:24:32.610 }, 00:24:32.610 "auth": { 00:24:32.610 "state": "completed", 00:24:32.610 "digest": "sha512", 00:24:32.610 "dhgroup": "ffdhe3072" 00:24:32.610 } 00:24:32.610 } 00:24:32.610 ]' 00:24:32.610 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.868 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:33.126 13:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.692 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.950 13:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.516 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.516 13:52:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:34.516 { 00:24:34.516 "cntlid": 121, 00:24:34.516 "qid": 0, 00:24:34.516 "state": "enabled", 00:24:34.516 "listen_address": { 00:24:34.516 "trtype": "TCP", 00:24:34.516 "adrfam": "IPv4", 00:24:34.516 "traddr": "10.0.0.2", 00:24:34.516 "trsvcid": "4420" 00:24:34.516 }, 00:24:34.516 "peer_address": { 00:24:34.516 "trtype": "TCP", 00:24:34.516 "adrfam": "IPv4", 00:24:34.516 "traddr": "10.0.0.1", 00:24:34.516 "trsvcid": "43836" 00:24:34.516 }, 00:24:34.516 "auth": { 00:24:34.516 "state": "completed", 00:24:34.516 "digest": "sha512", 00:24:34.516 "dhgroup": "ffdhe4096" 00:24:34.516 } 00:24:34.516 } 00:24:34.516 ]' 00:24:34.516 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.774 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:35.032 13:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:35.599 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.600 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.858 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:24:35.858 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:35.858 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:35.858 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:35.858 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:35.858 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:35.859 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.859 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.859 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.859 13:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.859 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.859 13:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.427 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:36.427 { 00:24:36.427 "cntlid": 123, 00:24:36.427 "qid": 0, 00:24:36.427 "state": "enabled", 00:24:36.427 "listen_address": { 00:24:36.427 "trtype": "TCP", 00:24:36.427 "adrfam": "IPv4", 00:24:36.427 "traddr": "10.0.0.2", 00:24:36.427 "trsvcid": "4420" 00:24:36.427 }, 00:24:36.427 "peer_address": { 00:24:36.427 "trtype": "TCP", 00:24:36.427 "adrfam": "IPv4", 00:24:36.427 "traddr": "10.0.0.1", 00:24:36.427 "trsvcid": "43862" 00:24:36.427 }, 00:24:36.427 "auth": { 00:24:36.427 "state": "completed", 00:24:36.427 "digest": "sha512", 00:24:36.427 "dhgroup": "ffdhe4096" 00:24:36.427 } 00:24:36.427 } 00:24:36.427 ]' 00:24:36.427 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:36.686 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:36.945 13:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:37.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.520 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:37.781 
13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.781 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.039 00:24:38.297 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:38.297 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:38.297 13:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:38.297 { 00:24:38.297 "cntlid": 125, 00:24:38.297 "qid": 0, 00:24:38.297 "state": "enabled", 00:24:38.297 "listen_address": { 00:24:38.297 "trtype": "TCP", 00:24:38.297 "adrfam": "IPv4", 00:24:38.297 "traddr": "10.0.0.2", 00:24:38.297 "trsvcid": "4420" 00:24:38.297 }, 00:24:38.297 "peer_address": { 00:24:38.297 "trtype": "TCP", 00:24:38.297 "adrfam": "IPv4", 00:24:38.297 "traddr": "10.0.0.1", 00:24:38.297 "trsvcid": "43884" 00:24:38.297 }, 00:24:38.297 "auth": { 00:24:38.297 "state": "completed", 00:24:38.297 "digest": "sha512", 00:24:38.297 "dhgroup": "ffdhe4096" 00:24:38.297 } 00:24:38.297 } 00:24:38.297 ]' 00:24:38.297 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.554 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:38.812 13:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:39.374 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:39.631 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.888 13:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:39.888 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:39.888 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:40.146 00:24:40.146 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:40.146 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:40.146 13:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:40.404 { 00:24:40.404 "cntlid": 127, 00:24:40.404 "qid": 0, 00:24:40.404 "state": "enabled", 00:24:40.404 "listen_address": { 00:24:40.404 "trtype": "TCP", 00:24:40.404 "adrfam": "IPv4", 00:24:40.404 "traddr": "10.0.0.2", 00:24:40.404 "trsvcid": "4420" 00:24:40.404 }, 00:24:40.404 "peer_address": { 00:24:40.404 "trtype": "TCP", 00:24:40.404 "adrfam": "IPv4", 00:24:40.404 "traddr": "10.0.0.1", 00:24:40.404 "trsvcid": "43918" 00:24:40.404 }, 00:24:40.404 "auth": { 00:24:40.404 "state": "completed", 00:24:40.404 "digest": "sha512", 00:24:40.404 "dhgroup": "ffdhe4096" 00:24:40.404 } 00:24:40.404 } 00:24:40.404 ]' 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:40.404 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:40.662 13:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
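The cycle traced above repeats identically for every DH group in the matrix, so it is worth condensing once. One connect_authenticate pass, as it runs for the iteration that follows (sha512/ffdhe6144, key index 0), boils down to the commands below; this is a sketch assembled from the xtrace, where rpc_cmd drives the target's RPC socket, hostrpc is the rpc.py wrapper with -s /var/tmp/host.sock seen in the traces, $hostnqn stands in for nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e, and the key0/ckey0 pair is assumed to have been registered earlier in the run:

  # Restrict the host to the combination under test, then wire up both
  # sides with matching DH-HMAC-CHAP keys.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Confirm the controller exists and that the qpair really negotiated
  # what was asked for.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0

  # Repeat the handshake from the kernel initiator (DHHC-1 secrets elided),
  # then deregister the host so the next iteration starts clean.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Every iteration that follows in the log is this sequence with a different dhgroup/key pair substituted in.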
00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.595 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.161 00:24:42.161 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:42.161 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:42.161 13:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:42.418 { 00:24:42.418 "cntlid": 129, 00:24:42.418 "qid": 0, 00:24:42.418 "state": "enabled", 00:24:42.418 "listen_address": { 00:24:42.418 "trtype": "TCP", 00:24:42.418 "adrfam": "IPv4", 00:24:42.418 "traddr": "10.0.0.2", 00:24:42.418 "trsvcid": "4420" 00:24:42.418 }, 00:24:42.418 "peer_address": { 00:24:42.418 "trtype": "TCP", 00:24:42.418 "adrfam": "IPv4", 00:24:42.418 "traddr": "10.0.0.1", 00:24:42.418 "trsvcid": "42084" 00:24:42.418 }, 00:24:42.418 "auth": { 
00:24:42.418 "state": "completed", 00:24:42.418 "digest": "sha512", 00:24:42.418 "dhgroup": "ffdhe6144" 00:24:42.418 } 00:24:42.418 } 00:24:42.418 ]' 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.418 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.676 13:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:43.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:43.327 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.585 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.151 00:24:44.151 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:44.151 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:44.151 13:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:44.151 { 00:24:44.151 "cntlid": 131, 00:24:44.151 "qid": 0, 00:24:44.151 "state": "enabled", 00:24:44.151 "listen_address": { 00:24:44.151 "trtype": "TCP", 00:24:44.151 "adrfam": "IPv4", 00:24:44.151 "traddr": "10.0.0.2", 00:24:44.151 "trsvcid": "4420" 00:24:44.151 }, 00:24:44.151 "peer_address": { 00:24:44.151 "trtype": "TCP", 00:24:44.151 "adrfam": "IPv4", 00:24:44.151 "traddr": "10.0.0.1", 00:24:44.151 "trsvcid": "42112" 00:24:44.151 }, 00:24:44.151 "auth": { 00:24:44.151 "state": "completed", 00:24:44.151 "digest": "sha512", 00:24:44.151 "dhgroup": "ffdhe6144" 00:24:44.151 } 00:24:44.151 } 00:24:44.151 ]' 00:24:44.151 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:44.409 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.667 13:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:45.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:45.233 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.491 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:24:46.055 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:46.055 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.312 13:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:46.312 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:46.312 { 00:24:46.312 "cntlid": 133, 00:24:46.312 "qid": 0, 00:24:46.312 "state": "enabled", 00:24:46.312 "listen_address": { 00:24:46.312 "trtype": "TCP", 00:24:46.312 "adrfam": "IPv4", 00:24:46.312 "traddr": "10.0.0.2", 00:24:46.312 "trsvcid": "4420" 00:24:46.312 }, 00:24:46.312 "peer_address": { 00:24:46.312 "trtype": "TCP", 00:24:46.312 "adrfam": "IPv4", 00:24:46.312 "traddr": "10.0.0.1", 00:24:46.312 "trsvcid": "42134" 00:24:46.312 }, 00:24:46.312 "auth": { 00:24:46.312 "state": "completed", 00:24:46.312 "digest": "sha512", 00:24:46.312 "dhgroup": "ffdhe6144" 00:24:46.312 } 00:24:46.312 } 00:24:46.312 ]' 00:24:46.312 13:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.312 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.571 13:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:47.138 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:47.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:47.138 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:47.138 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.138 13:52:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:47.397 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:47.964 00:24:47.964 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:47.964 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:47.964 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.222 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.222 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:48.222 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:48.222 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.222 13:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:48.222 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:48.222 { 00:24:48.222 "cntlid": 135, 00:24:48.222 "qid": 0, 00:24:48.222 "state": "enabled", 00:24:48.222 "listen_address": { 
00:24:48.222 "trtype": "TCP", 00:24:48.222 "adrfam": "IPv4", 00:24:48.222 "traddr": "10.0.0.2", 00:24:48.222 "trsvcid": "4420" 00:24:48.222 }, 00:24:48.222 "peer_address": { 00:24:48.222 "trtype": "TCP", 00:24:48.223 "adrfam": "IPv4", 00:24:48.223 "traddr": "10.0.0.1", 00:24:48.223 "trsvcid": "42160" 00:24:48.223 }, 00:24:48.223 "auth": { 00:24:48.223 "state": "completed", 00:24:48.223 "digest": "sha512", 00:24:48.223 "dhgroup": "ffdhe6144" 00:24:48.223 } 00:24:48.223 } 00:24:48.223 ]' 00:24:48.223 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:48.223 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:48.223 13:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:48.223 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:48.223 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:48.223 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:48.223 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.223 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:48.481 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:49.048 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:49.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:49.048 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:49.048 13:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.048 13:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 13:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.320 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.320 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:49.320 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:49.320 13:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.320 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.900 00:24:49.900 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:49.900 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:49.900 13:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:50.159 { 00:24:50.159 "cntlid": 137, 00:24:50.159 "qid": 0, 00:24:50.159 "state": "enabled", 00:24:50.159 "listen_address": { 00:24:50.159 "trtype": "TCP", 00:24:50.159 "adrfam": "IPv4", 00:24:50.159 "traddr": "10.0.0.2", 00:24:50.159 "trsvcid": "4420" 00:24:50.159 }, 00:24:50.159 "peer_address": { 00:24:50.159 "trtype": "TCP", 00:24:50.159 "adrfam": "IPv4", 00:24:50.159 "traddr": "10.0.0.1", 00:24:50.159 "trsvcid": "42186" 00:24:50.159 }, 00:24:50.159 "auth": { 00:24:50.159 "state": "completed", 00:24:50.159 "digest": "sha512", 00:24:50.159 "dhgroup": "ffdhe8192" 00:24:50.159 } 00:24:50.159 } 00:24:50.159 ]' 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:50.159 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:50.160 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:50.418 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:50.418 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:50.418 13:52:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:50.418 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.418 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.677 13:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:51.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:51.245 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:51.504 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.504 13:52:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.072 00:24:52.072 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:52.072 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:52.072 13:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:52.330 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.330 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:52.330 13:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.330 13:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.330 13:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.330 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:52.330 { 00:24:52.330 "cntlid": 139, 00:24:52.330 "qid": 0, 00:24:52.330 "state": "enabled", 00:24:52.330 "listen_address": { 00:24:52.330 "trtype": "TCP", 00:24:52.330 "adrfam": "IPv4", 00:24:52.330 "traddr": "10.0.0.2", 00:24:52.330 "trsvcid": "4420" 00:24:52.330 }, 00:24:52.331 "peer_address": { 00:24:52.331 "trtype": "TCP", 00:24:52.331 "adrfam": "IPv4", 00:24:52.331 "traddr": "10.0.0.1", 00:24:52.331 "trsvcid": "41780" 00:24:52.331 }, 00:24:52.331 "auth": { 00:24:52.331 "state": "completed", 00:24:52.331 "digest": "sha512", 00:24:52.331 "dhgroup": "ffdhe8192" 00:24:52.331 } 00:24:52.331 } 00:24:52.331 ]' 00:24:52.331 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:52.331 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:52.331 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:52.589 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:52.589 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:52.589 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:52.589 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:52.589 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:52.848 13:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZTNhYzNiNTY4MjI2ZTExNjE1NzhiYjU4YWU5MGQ1YTeg3EPm: --dhchap-ctrl-secret DHHC-1:02:ODgyZTBhYzU1NjBhOGQxNTBkNzU5NzQ0NTFmN2FmNDU2N2UzYjIxOWE2ZGUyZjY0OhyI/A==: 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:53.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.414 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:53.672 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.673 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.673 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.673 13:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.673 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.673 13:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.238 00:24:54.238 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:54.238 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:54.238 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:54.496 { 00:24:54.496 "cntlid": 141, 00:24:54.496 "qid": 0, 00:24:54.496 "state": "enabled", 00:24:54.496 "listen_address": { 00:24:54.496 "trtype": "TCP", 00:24:54.496 "adrfam": "IPv4", 00:24:54.496 "traddr": "10.0.0.2", 00:24:54.496 "trsvcid": "4420" 00:24:54.496 }, 00:24:54.496 "peer_address": { 00:24:54.496 "trtype": "TCP", 00:24:54.496 "adrfam": "IPv4", 00:24:54.496 "traddr": "10.0.0.1", 00:24:54.496 "trsvcid": "41804" 00:24:54.496 }, 00:24:54.496 "auth": { 00:24:54.496 "state": "completed", 00:24:54.496 "digest": "sha512", 00:24:54.496 "dhgroup": "ffdhe8192" 00:24:54.496 } 00:24:54.496 } 00:24:54.496 ]' 00:24:54.496 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:54.754 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:55.012 13:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTI0ZGNmNzVlMWRmZDMxMjY3OWE0NTBjYzNiNGVlYmRmMzZhZWNhNzIyMzFjMWQzVWcNcg==: --dhchap-ctrl-secret DHHC-1:01:YTU0NzhlZjIxNWViMmI0Njk1ODllMDY0OTY4YjExMjdLmHJ8: 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.579 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:55.837 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:24:55.837 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:55.838 13:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:56.404 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:56.662 { 00:24:56.662 "cntlid": 143, 00:24:56.662 "qid": 0, 00:24:56.662 "state": "enabled", 00:24:56.662 "listen_address": { 00:24:56.662 "trtype": "TCP", 00:24:56.662 "adrfam": "IPv4", 00:24:56.662 "traddr": "10.0.0.2", 00:24:56.662 "trsvcid": "4420" 00:24:56.662 }, 00:24:56.662 "peer_address": { 00:24:56.662 "trtype": "TCP", 00:24:56.662 "adrfam": "IPv4", 00:24:56.662 "traddr": "10.0.0.1", 00:24:56.662 "trsvcid": "41834" 00:24:56.662 }, 00:24:56.662 "auth": { 00:24:56.662 "state": "completed", 00:24:56.662 "digest": "sha512", 00:24:56.662 "dhgroup": "ffdhe8192" 00:24:56.662 } 00:24:56.662 } 00:24:56.662 ]' 00:24:56.662 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:56.920 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:56.920 13:52:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:56.920 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:56.920 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:56.920 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:56.920 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:56.920 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:57.178 13:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:57.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.745 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 
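After the last per-group pass, target/auth.sh@102-@103 rebuild the host configuration with every digest and DH group enabled at once, and @114 re-runs connect_authenticate for sha512/ffdhe8192 against that widest setting. The comma-joined option strings are produced by a small IFS join; roughly, as a sketch with the array contents taken from the printf output above:

  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  # "${arr[*]}" joins elements on the first character of IFS; doing it
  # inside a command substitution keeps the IFS change out of the caller.
  hostrpc bdev_nvme_set_options \
      --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
      --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"

With the host free to negotiate anything, the qpair checks that follow still report sha512 and ffdhe8192.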
00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.004 13:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.571 00:24:58.571 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:58.571 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:58.571 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:58.829 { 00:24:58.829 "cntlid": 145, 00:24:58.829 "qid": 0, 00:24:58.829 "state": "enabled", 00:24:58.829 "listen_address": { 00:24:58.829 "trtype": "TCP", 00:24:58.829 "adrfam": "IPv4", 00:24:58.829 "traddr": "10.0.0.2", 00:24:58.829 "trsvcid": "4420" 00:24:58.829 }, 00:24:58.829 "peer_address": { 00:24:58.829 "trtype": "TCP", 00:24:58.829 "adrfam": "IPv4", 00:24:58.829 "traddr": "10.0.0.1", 00:24:58.829 "trsvcid": "41868" 00:24:58.829 }, 00:24:58.829 "auth": { 00:24:58.829 "state": "completed", 00:24:58.829 "digest": "sha512", 00:24:58.829 "dhgroup": "ffdhe8192" 00:24:58.829 } 00:24:58.829 } 00:24:58.829 ]' 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:58.829 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:59.087 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:59.087 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:59.087 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:59.087 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:59.087 13:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:59.447 
13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:Mjc4OTM3M2FiYWRmZmRlOWVhMzgwZTVhYmY2OTg4MTU1ZmZiNjg0MmZkMmE3OTQwxc4DTw==: --dhchap-ctrl-secret DHHC-1:03:ZmE0M2JlYTdiNmIzYWVlZjBkMDBkYWFkM2QyZWYyMDcwZGZjNzliZGFmMTQ4NmVmMjA5OWUxY2ZhMDc4MmRmNM1gpOU=: 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:00.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:00.012 13:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:00.578 request: 00:25:00.578 { 00:25:00.578 "name": "nvme0", 00:25:00.578 "trtype": "tcp", 00:25:00.578 "traddr": 
"10.0.0.2", 00:25:00.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:25:00.578 "adrfam": "ipv4", 00:25:00.578 "trsvcid": "4420", 00:25:00.578 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:00.578 "dhchap_key": "key2", 00:25:00.578 "method": "bdev_nvme_attach_controller", 00:25:00.578 "req_id": 1 00:25:00.578 } 00:25:00.578 Got JSON-RPC error response 00:25:00.578 response: 00:25:00.578 { 00:25:00.578 "code": -5, 00:25:00.578 "message": "Input/output error" 00:25:00.578 } 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:00.578 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:01.145 request: 00:25:01.145 { 00:25:01.145 "name": "nvme0", 00:25:01.145 "trtype": "tcp", 00:25:01.145 "traddr": "10.0.0.2", 00:25:01.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:25:01.145 "adrfam": "ipv4", 00:25:01.145 "trsvcid": "4420", 00:25:01.145 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:01.145 "dhchap_key": "key1", 00:25:01.145 "dhchap_ctrlr_key": "ckey2", 00:25:01.145 "method": "bdev_nvme_attach_controller", 00:25:01.145 "req_id": 1 00:25:01.145 } 00:25:01.145 Got JSON-RPC error response 00:25:01.145 response: 00:25:01.145 { 00:25:01.145 "code": -5, 00:25:01.145 "message": "Input/output error" 00:25:01.145 } 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.145 13:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.712 request: 00:25:01.712 { 00:25:01.712 "name": "nvme0", 00:25:01.712 "trtype": "tcp", 00:25:01.712 "traddr": "10.0.0.2", 00:25:01.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:25:01.712 "adrfam": "ipv4", 00:25:01.712 "trsvcid": "4420", 00:25:01.712 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:01.712 "dhchap_key": "key1", 00:25:01.712 "dhchap_ctrlr_key": "ckey1", 00:25:01.712 "method": "bdev_nvme_attach_controller", 00:25:01.712 "req_id": 1 00:25:01.712 } 00:25:01.712 Got JSON-RPC error response 00:25:01.712 response: 00:25:01.712 { 00:25:01.712 "code": -5, 00:25:01.712 "message": "Input/output error" 00:25:01.712 } 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1442876 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1442876 ']' 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1442876 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:01.712 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1442876 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1442876' 00:25:01.971 killing process with pid 1442876 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1442876 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1442876 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:25:01.971 13:52:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1468888 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1468888 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1468888 ']' 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:01.971 13:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1468888 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1468888 ']' 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
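At this point the suite restarts nvmf_tgt for the second half of the test with auth-level logging enabled (-L nvmf_auth) and --wait-for-rpc, then blocks until the RPC socket answers. Roughly what nvmfappstart amounts to here, with the wait written out as a plain poll (the real waitforlisten helper in autotest_common.sh is more involved; assume the working directory is the spdk checkout):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # poll until the app is up and listening on its UNIX domain socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the app died during startup
        sleep 0.5
    done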
00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:02.906 13:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.165 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:03.165 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:25:03.165 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:25:03.165 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.165 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:03.424 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:03.991 00:25:03.991 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:03.991 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:03.991 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:04.249 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.249 13:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:04.249 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:04.249 13:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:25:04.249 { 00:25:04.249 
"cntlid": 1, 00:25:04.249 "qid": 0, 00:25:04.249 "state": "enabled", 00:25:04.249 "listen_address": { 00:25:04.249 "trtype": "TCP", 00:25:04.249 "adrfam": "IPv4", 00:25:04.249 "traddr": "10.0.0.2", 00:25:04.249 "trsvcid": "4420" 00:25:04.249 }, 00:25:04.249 "peer_address": { 00:25:04.249 "trtype": "TCP", 00:25:04.249 "adrfam": "IPv4", 00:25:04.249 "traddr": "10.0.0.1", 00:25:04.249 "trsvcid": "35848" 00:25:04.249 }, 00:25:04.249 "auth": { 00:25:04.249 "state": "completed", 00:25:04.249 "digest": "sha512", 00:25:04.249 "dhgroup": "ffdhe8192" 00:25:04.249 } 00:25:04.249 } 00:25:04.249 ]' 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:04.249 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:04.508 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:Mjk4OTJmNTE5Njc2NjE4MDI5NDY2NGE1MjhlZmQzZTkyMzk0ODcwZWNiZWU3MTY4MzFiYmNjY2JhNmE2OGM5ZYOViUI=: 00:25:05.443 13:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:05.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.443 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.702 request: 00:25:05.702 { 00:25:05.702 "name": "nvme0", 00:25:05.702 "trtype": "tcp", 00:25:05.702 "traddr": "10.0.0.2", 00:25:05.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:25:05.702 "adrfam": "ipv4", 00:25:05.702 "trsvcid": "4420", 00:25:05.702 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:05.702 "dhchap_key": "key3", 00:25:05.702 "method": "bdev_nvme_attach_controller", 00:25:05.702 "req_id": 1 00:25:05.702 } 00:25:05.702 Got JSON-RPC error response 00:25:05.702 response: 00:25:05.702 { 00:25:05.702 "code": -5, 00:25:05.702 "message": "Input/output error" 00:25:05.702 } 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:05.702 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:05.961 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:06.220 request: 00:25:06.220 { 00:25:06.220 "name": "nvme0", 00:25:06.220 "trtype": "tcp", 00:25:06.220 "traddr": "10.0.0.2", 00:25:06.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:25:06.220 "adrfam": "ipv4", 00:25:06.220 "trsvcid": "4420", 00:25:06.220 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:06.220 "dhchap_key": "key3", 00:25:06.220 "method": "bdev_nvme_attach_controller", 00:25:06.220 "req_id": 1 00:25:06.220 } 00:25:06.220 Got JSON-RPC error response 00:25:06.220 response: 00:25:06.220 { 00:25:06.220 "code": -5, 00:25:06.220 "message": "Input/output error" 00:25:06.220 } 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:06.220 13:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:06.479 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:25:06.737 request: 00:25:06.737 { 00:25:06.737 "name": "nvme0", 00:25:06.737 "trtype": "tcp", 00:25:06.737 "traddr": "10.0.0.2", 00:25:06.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:25:06.737 "adrfam": "ipv4", 00:25:06.737 "trsvcid": "4420", 00:25:06.737 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:06.737 "dhchap_key": "key0", 00:25:06.737 "dhchap_ctrlr_key": "key1", 00:25:06.737 "method": "bdev_nvme_attach_controller", 00:25:06.737 "req_id": 1 00:25:06.737 } 00:25:06.737 Got JSON-RPC error response 00:25:06.737 response: 00:25:06.737 { 00:25:06.737 "code": -5, 00:25:06.737 "message": "Input/output error" 00:25:06.737 } 00:25:06.737 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:06.737 13:52:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:06.737 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:06.737 13:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:06.737 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:06.737 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:06.995 00:25:06.995 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:25:06.995 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:25:06.995 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:07.254 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.254 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.254 13:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1443154 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1443154 ']' 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1443154 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1443154 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1443154' 00:25:07.512 killing process with pid 1443154 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1443154 00:25:07.512 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1443154 00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
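The teardown above runs through the suite's killprocess helper (here for pid 1443154, the host-side app; the target follows). Condensed to just the checks visible in the trace, since the real helper in autotest_common.sh carries extra cases, it behaves like:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1          # is it still running?
        if [ "$(uname)" = Linux ]; then
            # refuse to kill a sudo wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # reap it and collect the exit code
    }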
00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.770 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.770 rmmod nvme_tcp 00:25:07.770 rmmod nvme_fabrics 00:25:07.770 rmmod nvme_keyring 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1468888 ']' 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1468888 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1468888 ']' 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1468888 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1468888 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1468888' 00:25:08.029 killing process with pid 1468888 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1468888 00:25:08.029 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1468888 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.288 13:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.194 13:53:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.194 13:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ceK /tmp/spdk.key-sha256.swE /tmp/spdk.key-sha384.0Qh /tmp/spdk.key-sha512.0Zo /tmp/spdk.key-sha512.RGJ /tmp/spdk.key-sha384.YJ7 /tmp/spdk.key-sha256.fMf '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:25:10.194 00:25:10.194 real 2m41.792s 00:25:10.194 user 6m5.399s 00:25:10.194 sys 0m33.025s 00:25:10.194 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:10.194 13:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.194 ************************************ 00:25:10.194 END TEST 
nvmf_auth_target 00:25:10.194 ************************************ 00:25:10.194 13:53:03 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:25:10.194 13:53:03 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:10.194 13:53:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:25:10.195 13:53:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:10.195 13:53:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.453 ************************************ 00:25:10.453 START TEST nvmf_bdevio_no_huge 00:25:10.453 ************************************ 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:10.453 * Looking for test storage... 00:25:10.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.453 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.454 13:53:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:17.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:17.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.020 13:53:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:17.020 Found net devices under 0000:af:00.0: cvl_0_0 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.020 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:17.020 Found net devices under 0000:af:00.1: cvl_0_1 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.021 
13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:17.021 13:53:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:17.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:25:17.280 00:25:17.280 --- 10.0.0.2 ping statistics --- 00:25:17.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.280 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:17.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:25:17.280 00:25:17.280 --- 10.0.0.1 ping statistics --- 00:25:17.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.280 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1473687 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1473687 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 1473687 ']' 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
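Everything nvmf_tcp_init does above builds a self-contained NVMe/TCP link out of the two E810 ports: the first port (cvl_0_0) is moved into a fresh network namespace to act as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove reachability in both directions before any NVMe traffic flows. Condensed, with the commands taken verbatim from the trace (address flushes and error handling omitted):

ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

Every target-side command from here on is prefixed with "ip netns exec cvl_0_0_ns_spdk", which is exactly what NVMF_TARGET_NS_CMD expands to.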
00:25:17.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:17.280 13:53:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:17.280 [2024-06-11 13:53:10.133884] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:25:17.280 [2024-06-11 13:53:10.133949] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:17.539 [2024-06-11 13:53:10.238345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.539 [2024-06-11 13:53:10.370284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.539 [2024-06-11 13:53:10.370324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.539 [2024-06-11 13:53:10.370337] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.539 [2024-06-11 13:53:10.370349] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.539 [2024-06-11 13:53:10.370359] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.539 [2024-06-11 13:53:10.370500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:25:17.539 [2024-06-11 13:53:10.370576] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:25:17.539 [2024-06-11 13:53:10.370683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.539 [2024-06-11 13:53:10.370683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:25:18.473 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 [2024-06-11 13:53:11.073704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.474 13:53:11 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 Malloc0 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:18.474 [2024-06-11 13:53:11.114190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:18.474 { 00:25:18.474 "params": { 00:25:18.474 "name": "Nvme$subsystem", 00:25:18.474 "trtype": "$TEST_TRANSPORT", 00:25:18.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:18.474 "adrfam": "ipv4", 00:25:18.474 "trsvcid": "$NVMF_PORT", 00:25:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:18.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:18.474 "hdgst": ${hdgst:-false}, 00:25:18.474 "ddgst": ${ddgst:-false} 00:25:18.474 }, 00:25:18.474 "method": "bdev_nvme_attach_controller" 00:25:18.474 } 00:25:18.474 EOF 00:25:18.474 )") 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:25:18.474 13:53:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:18.474 "params": { 00:25:18.474 "name": "Nvme1", 00:25:18.474 "trtype": "tcp", 00:25:18.474 "traddr": "10.0.0.2", 00:25:18.474 "adrfam": "ipv4", 00:25:18.474 "trsvcid": "4420", 00:25:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:18.474 "hdgst": false, 00:25:18.474 "ddgst": false 00:25:18.474 }, 00:25:18.474 "method": "bdev_nvme_attach_controller" 00:25:18.474 }' 00:25:18.474 [2024-06-11 13:53:11.145692] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:25:18.474 [2024-06-11 13:53:11.145741] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1473746 ] 00:25:18.474 [2024-06-11 13:53:11.239671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.474 [2024-06-11 13:53:11.371247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.474 [2024-06-11 13:53:11.371342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.474 [2024-06-11 13:53:11.371347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.041 I/O targets: 00:25:19.041 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:19.041 00:25:19.041 00:25:19.041 CUnit - A unit testing framework for C - Version 2.1-3 00:25:19.041 http://cunit.sourceforge.net/ 00:25:19.041 00:25:19.041 00:25:19.041 Suite: bdevio tests on: Nvme1n1 00:25:19.041 Test: blockdev write read block ...passed 00:25:19.041 Test: blockdev write zeroes read block ...passed 00:25:19.041 Test: blockdev write zeroes read no split ...passed 00:25:19.041 Test: blockdev write zeroes read split ...passed 00:25:19.041 Test: blockdev write zeroes read split partial ...passed 00:25:19.041 Test: blockdev reset ...[2024-06-11 13:53:11.890926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.041 [2024-06-11 13:53:11.890999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ec150 (9): Bad file descriptor 00:25:19.041 [2024-06-11 13:53:11.907485] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
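The bdevio process exercised above receives its whole configuration up front: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry, and the script hands it over as --json /dev/fd/62 through bash process substitution, with --no-huge -s 1024 keeping the initiator off hugepages just like the target. A sketch of an equivalent standalone invocation follows; the inner params object is verbatim from the printf above, but the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's JSON config layout (the trace does not show it), and the binary path is shortened:

./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON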
00:25:19.041 passed 00:25:19.041 Test: blockdev write read 8 blocks ...passed 00:25:19.041 Test: blockdev write read size > 128k ...passed 00:25:19.041 Test: blockdev write read invalid size ...passed 00:25:19.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:19.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:19.300 Test: blockdev write read max offset ...passed 00:25:19.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:19.300 Test: blockdev writev readv 8 blocks ...passed 00:25:19.300 Test: blockdev writev readv 30 x 1block ...passed 00:25:19.300 Test: blockdev writev readv block ...passed 00:25:19.300 Test: blockdev writev readv size > 128k ...passed 00:25:19.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:19.300 Test: blockdev comparev and writev ...[2024-06-11 13:53:12.122775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.122805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.122820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.122830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.123181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.123193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.123206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.123216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.123578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.123592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.123606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.123616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.123961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.123975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.123989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:19.300 [2024-06-11 13:53:12.124000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:19.300 passed 00:25:19.300 Test: blockdev nvme passthru rw ...passed 00:25:19.300 Test: blockdev nvme passthru vendor specific ...[2024-06-11 13:53:12.205917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.300 [2024-06-11 13:53:12.205934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.206117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.300 [2024-06-11 13:53:12.206129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.206306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.300 [2024-06-11 13:53:12.206318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:19.300 [2024-06-11 13:53:12.206505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.300 [2024-06-11 13:53:12.206517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:19.300 passed 00:25:19.559 Test: blockdev nvme admin passthru ...passed 00:25:19.559 Test: blockdev copy ...passed 00:25:19.559 00:25:19.559 Run Summary: Type Total Ran Passed Failed Inactive 00:25:19.559 suites 1 1 n/a 0 0 00:25:19.559 tests 23 23 23 0 0 00:25:19.559 asserts 152 152 152 0 n/a 00:25:19.559 00:25:19.559 Elapsed time = 1.195 seconds 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:25:19.817 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.818 rmmod nvme_tcp 00:25:19.818 rmmod nvme_fabrics 00:25:19.818 rmmod nvme_keyring 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1473687 ']' 00:25:19.818 13:53:12 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1473687 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 1473687 ']' 00:25:19.818 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 1473687 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1473687 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1473687' 00:25:20.077 killing process with pid 1473687 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 1473687 00:25:20.077 13:53:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 1473687 00:25:20.336 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:20.336 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:20.336 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:20.336 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:20.595 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:20.595 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.595 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.595 13:53:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.498 13:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:22.498 00:25:22.498 real 0m12.178s 00:25:22.498 user 0m15.414s 00:25:22.498 sys 0m6.548s 00:25:22.498 13:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:22.498 13:53:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:22.498 ************************************ 00:25:22.498 END TEST nvmf_bdevio_no_huge 00:25:22.498 ************************************ 00:25:22.498 13:53:15 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:22.498 13:53:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:22.498 13:53:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:22.498 13:53:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:22.498 ************************************ 00:25:22.498 START TEST nvmf_tls 00:25:22.498 ************************************ 00:25:22.498 13:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:22.757 * Looking for test storage... 
00:25:22.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.757 13:53:15 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:25:22.758 13:53:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.880 
13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:30.880 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:30.880 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:30.880 Found net devices under 0000:af:00.0: cvl_0_0 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:30.880 Found net devices under 0000:af:00.1: cvl_0_1 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:25:30.880 00:25:30.880 --- 10.0.0.2 ping statistics --- 00:25:30.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.880 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:25:30.880 00:25:30.880 --- 10.0.0.1 ping statistics --- 00:25:30.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.880 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1477842 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1477842 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1477842 ']' 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:30.880 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.881 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:30.881 13:53:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 [2024-06-11 13:53:22.739970] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:25:30.881 [2024-06-11 13:53:22.740031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.881 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.881 [2024-06-11 13:53:22.840627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.881 [2024-06-11 13:53:22.923956] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.881 [2024-06-11 13:53:22.924020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
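For the TLS suite the target is started with --wait-for-rpc, which parks the app before framework initialization; that matters because the ssl socket-implementation defaults probed next have to be in place before the TCP transport creates any sockets. The back-and-forth that follows sets and reads back tls_version and the ktls flag, then releases the app. In outline (rpc.py path shortened, expected read-backs from the trace):

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # 13
rpc.py sock_impl_set_options -i ssl --enable-ktls
rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # true
rpc.py sock_impl_set_options -i ssl --disable-ktls
rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # false
rpc.py framework_start_init                                # finish startup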
00:25:30.881 [2024-06-11 13:53:22.924034] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.881 [2024-06-11 13:53:22.924046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.881 [2024-06-11 13:53:22.924056] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.881 [2024-06-11 13:53:22.924092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:25:30.881 13:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:31.139 true 00:25:31.139 13:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:31.139 13:53:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:25:31.139 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:25:31.139 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:25:31.139 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:31.398 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:31.398 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:25:31.656 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:25:31.656 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:25:31.656 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:31.915 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:31.915 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:25:32.174 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:25:32.174 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:25:32.174 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:32.174 13:53:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:25:32.433 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:25:32.433 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:25:32.433 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:32.693 13:53:25 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:32.693 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:25:32.693 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:25:32.693 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:25:32.693 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:32.988 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:32.988 13:53:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:33.263 13:53:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Bik4gMGAL6 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Vh9VfpHO3d 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Bik4gMGAL6 00:25:33.264 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Vh9VfpHO3d 00:25:33.521 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:25:33.521 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:34.088 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Bik4gMGAL6 00:25:34.088 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Bik4gMGAL6 00:25:34.088 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:34.088 [2024-06-11 13:53:26.908511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.088 13:53:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:34.346 13:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:34.604 [2024-06-11 13:53:27.361676] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:34.604 [2024-06-11 13:53:27.361923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.604 13:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:34.863 malloc0 00:25:34.863 13:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:35.121 13:53:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bik4gMGAL6 00:25:35.380 [2024-06-11 13:53:28.044774] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:35.380 13:53:28 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Bik4gMGAL6 00:25:35.380 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.347 Initializing NVMe Controllers 00:25:45.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:45.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:45.347 Initialization complete. Launching workers. 
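The two NVMeTLSkey-1 strings minted just before this follow the NVMe TLS PSK interchange format: a fixed prefix, a two-hex-digit hash indicator (01 = SHA-256 here), and a base64 blob carrying the configured key with a 4-byte CRC32 appended, the whole thing colon-terminated. A sketch of what format_interchange_psk computes; treating the trailer as a little-endian zlib CRC32 is an assumption about the helper's internals, though it reproduces the first key printed in the trace:

format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib

key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity trailer
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Each key then goes into a chmod-0600 temp file and is registered on both ends: the target learns it via nvmf_subsystem_add_host --psk, and spdk_nvme_perf presents it via --psk-path.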
00:25:45.347 ======================================================== 00:25:45.347 Latency(us) 00:25:45.347 Device Information : IOPS MiB/s Average min max 00:25:45.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11620.19 45.39 5508.61 1121.08 6832.38 00:25:45.347 ======================================================== 00:25:45.347 Total : 11620.19 45.39 5508.61 1121.08 6832.38 00:25:45.347 00:25:45.347 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bik4gMGAL6 00:25:45.347 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:45.347 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bik4gMGAL6' 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1480393 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1480393 /var/tmp/bdevperf.sock 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1480393 ']' 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:45.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:45.348 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:45.348 [2024-06-11 13:53:38.237538] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
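run_bdevperf, which drives the rest of the suite, starts bdevperf idle (-z) on a private RPC socket, attaches the TLS-protected controller over that socket, and only then kicks off I/O. Condensed from the trace, with long paths shortened:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.Bik4gMGAL6        # host presents the same PSK file the target holds
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The resulting bdev is TLSTESTn1 (namespace 1 of the TLSTEST controller), which is what the 10-second verify workload below runs against.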
00:25:45.348 [2024-06-11 13:53:38.237607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480393 ] 00:25:45.606 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.606 [2024-06-11 13:53:38.315210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.606 [2024-06-11 13:53:38.387318] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.606 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:45.606 13:53:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:45.606 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bik4gMGAL6 00:25:45.865 [2024-06-11 13:53:38.683831] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:45.866 [2024-06-11 13:53:38.683906] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:45.866 TLSTESTn1 00:25:45.866 13:53:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:25:46.124 Running I/O for 10 seconds...
00:25:56.108
00:25:56.109 Latency(us)
00:25:56.109 Device Information : runtime(s)   IOPS    MiB/s   Fail/s   TO/s    Average   min       max
00:25:56.109 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:56.109 Verification LBA range: start 0x0 length 0x2000
00:25:56.109 TLSTESTn1 : 10.03 4472.38 17.47 0.00 0.00 28567.17 4613.73 40055.60
00:25:56.109 ===================================================================================================================
00:25:56.109 Total : 4472.38 17.47 0.00 0.00 28567.17 4613.73 40055.60
00:25:56.109 0
00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1480393 00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1480393 ']' 00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1480393 00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:56.109 13:53:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1480393 00:25:56.109 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:56.109 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:56.109 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1480393' 00:25:56.109 killing process with pid 1480393 00:25:56.109 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1480393 00:25:56.109 Received shutdown signal, test time was about 10.000000 seconds 00:25:56.109 00:25:56.109 Latency(us) 00:25:56.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:56.109 =================================================================================================================== 00:25:56.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.109 [2024-06-11 13:53:49.015668] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:56.109 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1480393 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vh9VfpHO3d 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vh9VfpHO3d 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Vh9VfpHO3d 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Vh9VfpHO3d' 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1482243 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1482243 /var/tmp/bdevperf.sock 00:25:56.368 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482243 ']' 00:25:56.369 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.369 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:56.369 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:56.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:56.369 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:56.369 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.369 [2024-06-11 13:53:49.245561] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:25:56.369 [2024-06-11 13:53:49.245625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482243 ] 00:25:56.628 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.628 [2024-06-11 13:53:49.323516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.628 [2024-06-11 13:53:49.387633] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.628 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:56.628 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:56.628 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Vh9VfpHO3d 00:25:56.887 [2024-06-11 13:53:49.692616] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:56.887 [2024-06-11 13:53:49.692696] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:56.888 [2024-06-11 13:53:49.697443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:56.888 [2024-06-11 13:53:49.698039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2553020 (107): Transport endpoint is not connected 00:25:56.888 [2024-06-11 13:53:49.699031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2553020 (9): Bad file descriptor 00:25:56.888 [2024-06-11 13:53:49.700032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.888 [2024-06-11 13:53:49.700045] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:56.888 [2024-06-11 13:53:49.700055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.888 request: 00:25:56.888 { 00:25:56.888 "name": "TLSTEST", 00:25:56.888 "trtype": "tcp", 00:25:56.888 "traddr": "10.0.0.2", 00:25:56.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:56.888 "adrfam": "ipv4", 00:25:56.888 "trsvcid": "4420", 00:25:56.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.888 "psk": "/tmp/tmp.Vh9VfpHO3d", 00:25:56.888 "method": "bdev_nvme_attach_controller", 00:25:56.888 "req_id": 1 00:25:56.888 } 00:25:56.888 Got JSON-RPC error response 00:25:56.888 response: 00:25:56.888 { 00:25:56.888 "code": -5, 00:25:56.888 "message": "Input/output error" 00:25:56.888 } 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1482243 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482243 ']' 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482243 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482243 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482243' 00:25:56.888 killing process with pid 1482243 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482243 00:25:56.888 Received shutdown signal, test time was about 10.000000 seconds 00:25:56.888 00:25:56.888 Latency(us) 00:25:56.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.888 =================================================================================================================== 00:25:56.888 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:56.888 [2024-06-11 13:53:49.786035] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:56.888 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482243 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Bik4gMGAL6 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Bik4gMGAL6 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Bik4gMGAL6 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bik4gMGAL6' 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1482362 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1482362 /var/tmp/bdevperf.sock 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482362 ']' 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:57.147 13:53:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:57.147 [2024-06-11 13:53:50.008284] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:25:57.147 [2024-06-11 13:53:50.008348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482362 ] 00:25:57.147 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.405 [2024-06-11 13:53:50.088296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.405 [2024-06-11 13:53:50.159233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.405 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:57.405 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:57.405 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Bik4gMGAL6 00:25:57.664 [2024-06-11 13:53:50.463891] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.664 [2024-06-11 13:53:50.463964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:57.664 [2024-06-11 13:53:50.468500] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:57.664 [2024-06-11 13:53:50.468533] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:57.664 [2024-06-11 13:53:50.468569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:57.664 [2024-06-11 13:53:50.469255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x967020 (107): Transport endpoint is not connected 00:25:57.664 [2024-06-11 13:53:50.470246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x967020 (9): Bad file descriptor 00:25:57.664 [2024-06-11 13:53:50.471247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.664 [2024-06-11 13:53:50.471259] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:57.664 [2024-06-11 13:53:50.471269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:57.664 request: 00:25:57.664 { 00:25:57.664 "name": "TLSTEST", 00:25:57.664 "trtype": "tcp", 00:25:57.664 "traddr": "10.0.0.2", 00:25:57.664 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:57.664 "adrfam": "ipv4", 00:25:57.664 "trsvcid": "4420", 00:25:57.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.664 "psk": "/tmp/tmp.Bik4gMGAL6", 00:25:57.664 "method": "bdev_nvme_attach_controller", 00:25:57.664 "req_id": 1 00:25:57.664 } 00:25:57.664 Got JSON-RPC error response 00:25:57.664 response: 00:25:57.664 { 00:25:57.664 "code": -5, 00:25:57.664 "message": "Input/output error" 00:25:57.664 } 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1482362 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482362 ']' 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482362 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482362 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482362' 00:25:57.664 killing process with pid 1482362 00:25:57.664 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482362 00:25:57.664 Received shutdown signal, test time was about 10.000000 seconds 00:25:57.664 00:25:57.664 Latency(us) 00:25:57.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.665 =================================================================================================================== 00:25:57.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:57.665 [2024-06-11 13:53:50.556881] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:57.665 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482362 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bik4gMGAL6 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bik4gMGAL6 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bik4gMGAL6 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bik4gMGAL6' 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1482528 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1482528 /var/tmp/bdevperf.sock 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482528 ']' 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:57.924 13:53:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:57.924 [2024-06-11 13:53:50.778769] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:25:57.924 [2024-06-11 13:53:50.778834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482528 ] 00:25:57.924 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.183 [2024-06-11 13:53:50.857587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.183 [2024-06-11 13:53:50.921663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.183 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:58.183 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:58.183 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bik4gMGAL6 00:25:58.442 [2024-06-11 13:53:51.222211] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:58.442 [2024-06-11 13:53:51.222297] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:58.442 [2024-06-11 13:53:51.229841] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:58.442 [2024-06-11 13:53:51.229872] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:58.442 [2024-06-11 13:53:51.229906] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:58.442 [2024-06-11 13:53:51.230585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219b020 (107): Transport endpoint is not connected 00:25:58.442 [2024-06-11 13:53:51.231579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219b020 (9): Bad file descriptor 00:25:58.442 [2024-06-11 13:53:51.232580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:58.442 [2024-06-11 13:53:51.232592] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:58.442 [2024-06-11 13:53:51.232601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:58.442 request: 00:25:58.442 { 00:25:58.442 "name": "TLSTEST", 00:25:58.442 "trtype": "tcp", 00:25:58.442 "traddr": "10.0.0.2", 00:25:58.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:58.442 "adrfam": "ipv4", 00:25:58.442 "trsvcid": "4420", 00:25:58.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:58.442 "psk": "/tmp/tmp.Bik4gMGAL6", 00:25:58.442 "method": "bdev_nvme_attach_controller", 00:25:58.442 "req_id": 1 00:25:58.442 } 00:25:58.442 Got JSON-RPC error response 00:25:58.442 response: 00:25:58.442 { 00:25:58.442 "code": -5, 00:25:58.442 "message": "Input/output error" 00:25:58.442 } 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1482528 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482528 ']' 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482528 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482528 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482528' 00:25:58.442 killing process with pid 1482528 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482528 00:25:58.442 Received shutdown signal, test time was about 10.000000 seconds 00:25:58.442 00:25:58.442 Latency(us) 00:25:58.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.442 =================================================================================================================== 00:25:58.442 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:58.442 [2024-06-11 13:53:51.308810] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:58.442 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482528 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
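(Sidebar on the wrapper whose xtrace runs here: NOT asserts that the wrapped command fails. A minimal sketch of the pattern — simplified; the real autotest_common.sh helper also validates its argument with valid_exec_arg, which is the `type -t` probing visible above:)

NOT() {
    local es=0
    "$@" || es=$?                  # run the wrapped command, capture its exit status
    (( es > 128 )) && return "$es" # death by signal is not a clean failure; propagate it
    (( es != 0 ))                  # succeed only if the command exited non-zero
}

# as used at target/tls.sh@155 just below: attaching with no PSK at all must fail
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''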
00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1482609 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1482609 /var/tmp/bdevperf.sock 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482609 ']' 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:58.700 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:58.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:58.701 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:58.701 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:58.701 [2024-06-11 13:53:51.534338] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:25:58.701 [2024-06-11 13:53:51.534404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482609 ] 00:25:58.701 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.959 [2024-06-11 13:53:51.613673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.959 [2024-06-11 13:53:51.680434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.959 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:58.959 13:53:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:58.959 13:53:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:59.217 [2024-06-11 13:53:51.991819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:59.217 [2024-06-11 13:53:51.993177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4c6d0 (9): Bad file descriptor 00:25:59.217 [2024-06-11 13:53:51.994175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.217 [2024-06-11 13:53:51.994189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:59.217 [2024-06-11 13:53:51.994198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:59.217 request: 00:25:59.217 { 00:25:59.217 "name": "TLSTEST", 00:25:59.217 "trtype": "tcp", 00:25:59.217 "traddr": "10.0.0.2", 00:25:59.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.217 "adrfam": "ipv4", 00:25:59.217 "trsvcid": "4420", 00:25:59.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.217 "method": "bdev_nvme_attach_controller", 00:25:59.217 "req_id": 1 00:25:59.217 } 00:25:59.217 Got JSON-RPC error response 00:25:59.217 response: 00:25:59.217 { 00:25:59.217 "code": -5, 00:25:59.217 "message": "Input/output error" 00:25:59.217 } 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1482609 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482609 ']' 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482609 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482609 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482609' 00:25:59.217 killing process with pid 1482609 00:25:59.217 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482609 00:25:59.217 Received shutdown signal, test time was about 10.000000 seconds 00:25:59.217 00:25:59.217 Latency(us) 00:25:59.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.217 =================================================================================================================== 00:25:59.218 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:59.218 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482609 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1477842 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1477842 ']' 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1477842 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1477842 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1477842' 00:25:59.475 killing process with pid 1477842 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1477842 
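(Sidebar: killprocess, whose xtrace appears just above while tearing down the long-running nvmf target pid 1477842, is the common teardown helper. Roughly, under the checks visible in this trace — the real helper carries extra platform branches:)

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1             # the '[' -z ... ']' guard in the trace
    kill -0 "$pid" || return 1            # is the pid still alive?
    if [[ $(uname) == Linux ]]; then
        # never signal a sudo wrapper directly; only the app (reactor_*) process
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it and propagate its exit status
}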
00:25:59.475 [2024-06-11 13:53:52.308678] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:59.475 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1477842 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.vNuJp2CO3D 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.vNuJp2CO3D 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:25:59.733 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1482824 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1482824 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1482824 ']' 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:59.734 13:53:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:59.734 [2024-06-11 13:53:52.642879] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:25:59.734 [2024-06-11 13:53:52.642944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.992 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.992 [2024-06-11 13:53:52.741367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.992 [2024-06-11 13:53:52.826026] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.992 [2024-06-11 13:53:52.826068] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.993 [2024-06-11 13:53:52.826081] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.993 [2024-06-11 13:53:52.826093] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.993 [2024-06-11 13:53:52.826103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.993 [2024-06-11 13:53:52.826128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.vNuJp2CO3D 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vNuJp2CO3D 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:00.927 [2024-06-11 13:53:53.793688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.927 13:53:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:01.186 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:01.444 [2024-06-11 13:53:54.238860] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:01.444 [2024-06-11 13:53:54.239088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.444 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:01.703 malloc0 00:26:01.703 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:01.964 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 
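(Sidebar: the /tmp/tmp.vNuJp2CO3D key registered here was produced a few entries up by format_interchange_psk, which calls format_key and an inline `python -`. What that python evidently emits is the TLS PSK interchange form: base64 of the configured PSK bytes with a little-endian CRC-32 appended, behind the NVMeTLSkey-1 prefix and a hash identifier. A standalone sketch under that assumption — passing arguments through sys.argv is this sketch's choice, not the original heredoc's:)

format_key() { # format_key <prefix> <key> <digest-id>
    local prefix=$1 key=$2 digest=$3
    python - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")  # CRC-32 of the raw key, LE
b64 = base64.b64encode(key.encode() + crc).decode("utf-8")
print(f"{prefix}:{digest:02}:{b64}:", end="")                   # digest 2 = SHA-384 PSK
EOF
}

# should reproduce the key_long value captured earlier in this log:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2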
00:26:02.283 [2024-06-11 13:53:54.897758] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vNuJp2CO3D 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vNuJp2CO3D' 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1483287 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:02.283 13:53:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1483287 /var/tmp/bdevperf.sock 00:26:02.284 13:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1483287 ']' 00:26:02.284 13:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:02.284 13:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:02.284 13:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:02.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:02.284 13:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:02.284 13:53:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.284 [2024-06-11 13:53:54.961145] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:26:02.284 [2024-06-11 13:53:54.961209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483287 ] 00:26:02.284 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.284 [2024-06-11 13:53:55.039980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.284 [2024-06-11 13:53:55.109279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.543 13:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:02.543 13:53:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:02.543 13:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 00:26:02.543 [2024-06-11 13:53:55.409775] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.543 [2024-06-11 13:53:55.409860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:02.802 TLSTESTn1 00:26:02.802 13:53:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:26:02.802 Running I/O for 10 seconds...
00:26:12.784
00:26:12.784 Latency(us)
00:26:12.784 Device Information : runtime(s)   IOPS    MiB/s   Fail/s   TO/s    Average   min       max
00:26:12.784 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:12.784 Verification LBA range: start 0x0 length 0x2000
00:26:12.784 TLSTESTn1 : 10.03 4442.39 17.35 0.00 0.00 28759.95 6579.81 44459.62
00:26:12.784 ===================================================================================================================
00:26:12.784 Total : 4442.39 17.35 0.00 0.00 28759.95 6579.81 44459.62
00:26:12.784 0
00:26:12.784 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:12.784 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1483287 00:26:12.784 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1483287 ']' 00:26:12.784 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1483287 00:26:12.784 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:12.784 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:13.043 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1483287 00:26:13.043 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:13.043 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:13.043 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1483287' 00:26:13.043 killing process with pid 1483287 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1483287 00:26:13.044 Received shutdown signal, test time was about 10.000000 seconds 00:26:13.044 00:26:13.044 Latency(us) 00:26:13.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:13.044 =================================================================================================================== 00:26:13.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.044 [2024-06-11 13:54:05.744449] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1483287 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.vNuJp2CO3D 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vNuJp2CO3D 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vNuJp2CO3D 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vNuJp2CO3D 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vNuJp2CO3D' 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1485526 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1485526 /var/tmp/bdevperf.sock 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1485526 ']' 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:13.044 13:54:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.303 [2024-06-11 13:54:05.983695] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:26:13.303 [2024-06-11 13:54:05.983759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485526 ] 00:26:13.303 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.303 [2024-06-11 13:54:06.061742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.303 [2024-06-11 13:54:06.127015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 00:26:13.563 [2024-06-11 13:54:06.431629] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:13.563 [2024-06-11 13:54:06.431682] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:13.563 [2024-06-11 13:54:06.431691] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.vNuJp2CO3D 00:26:13.563 request: 00:26:13.563 { 00:26:13.563 "name": "TLSTEST", 00:26:13.563 "trtype": "tcp", 00:26:13.563 "traddr": "10.0.0.2", 00:26:13.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.563 "adrfam": "ipv4", 00:26:13.563 "trsvcid": "4420", 00:26:13.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.563 "psk": "/tmp/tmp.vNuJp2CO3D", 00:26:13.563 "method": "bdev_nvme_attach_controller", 00:26:13.563 "req_id": 1 00:26:13.563 } 00:26:13.563 Got JSON-RPC error response 00:26:13.563 response: 00:26:13.563 { 00:26:13.563 "code": -1, 00:26:13.563 "message": "Operation not permitted" 00:26:13.563 } 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1485526 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1485526 ']' 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1485526 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:13.563 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1485526 00:26:13.822 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:13.822 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:13.822 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1485526' 00:26:13.822 killing process with pid 1485526 00:26:13.822 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1485526 00:26:13.822 Received shutdown signal, test time was about 10.000000 seconds 00:26:13.822 00:26:13.822 Latency(us) 00:26:13.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.823 =================================================================================================================== 00:26:13.823 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 1485526 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1482824 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1482824 ']' 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1482824 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:13.823 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1482824 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1482824' 00:26:14.082 killing process with pid 1482824 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1482824 00:26:14.082 [2024-06-11 13:54:06.751775] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1482824 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1485806 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1485806 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1485806 ']' 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:14.082 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.083 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:14.083 13:54:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.342 [2024-06-11 13:54:07.017533] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:26:14.342 [2024-06-11 13:54:07.017596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.342 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.342 [2024-06-11 13:54:07.115097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.342 [2024-06-11 13:54:07.199553] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.342 [2024-06-11 13:54:07.199598] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.342 [2024-06-11 13:54:07.199611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.342 [2024-06-11 13:54:07.199623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.342 [2024-06-11 13:54:07.199632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.342 [2024-06-11 13:54:07.199664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.vNuJp2CO3D 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vNuJp2CO3D 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.vNuJp2CO3D 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vNuJp2CO3D 00:26:15.281 13:54:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:15.281 [2024-06-11 13:54:08.183254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.540 13:54:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:15.540 13:54:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:15.799 [2024-06-11 13:54:08.632442] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
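
Stripped of the xtrace prefixes, the setup_nvmf_tgt run above (wrapped in NOT because it is expected to fail later, while the PSK file is still too permissive) starts with three rpc.py calls, sketched below with scripts/rpc.py abbreviating the workspace path. The -k on nvmf_subsystem_add_listener is the TLS switch: it produced the "TLS support is considered experimental" notice above, and the "Listening on 10.0.0.2 port 4420" notice follows next.

    # TLS-capable target bring-up: transport, subsystem, secured listener.
    scripts/rpc.py nvmf_create_transport -t tcp -o    # -o disables C2H success ("c2h_success": false in the saved config)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10                   # serial number, up to 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                 # -k => "secure_channel": true
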
00:26:15.799 [2024-06-11 13:54:08.632678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.799 13:54:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:16.058 malloc0 00:26:16.058 13:54:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:16.317 13:54:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 00:26:16.577 [2024-06-11 13:54:09.339532] tcp.c:3581:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:16.577 [2024-06-11 13:54:09.339567] tcp.c:3667:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:26:16.577 [2024-06-11 13:54:09.339599] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:16.577 request: 00:26:16.577 { 00:26:16.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.577 "host": "nqn.2016-06.io.spdk:host1", 00:26:16.577 "psk": "/tmp/tmp.vNuJp2CO3D", 00:26:16.577 "method": "nvmf_subsystem_add_host", 00:26:16.577 "req_id": 1 00:26:16.577 } 00:26:16.577 Got JSON-RPC error response 00:26:16.577 response: 00:26:16.577 { 00:26:16.577 "code": -32603, 00:26:16.577 "message": "Internal error" 00:26:16.577 } 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1485806 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1485806 ']' 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1485806 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1485806 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1485806' 00:26:16.577 killing process with pid 1485806 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1485806 00:26:16.577 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1485806 00:26:16.836 13:54:09 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.vNuJp2CO3D 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=1486327 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1486327 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1486327 ']' 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:16.837 13:54:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:16.837 [2024-06-11 13:54:09.709648] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:16.837 [2024-06-11 13:54:09.709709] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.097 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.097 [2024-06-11 13:54:09.807864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.097 [2024-06-11 13:54:09.885632] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.097 [2024-06-11 13:54:09.885679] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.097 [2024-06-11 13:54:09.885693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.097 [2024-06-11 13:54:09.885705] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.097 [2024-06-11 13:54:09.885715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
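
This restart exists because the first pass failed on purpose: both loaders rejected the key with "Incorrect permissions for PSK file" (JSON-RPC -1 from bdev_nvme_attach_controller, -32603 from nvmf_subsystem_add_host), and tls.sh then ran chmod 0600 on it. Once this instance is up, setup_nvmf_tgt re-runs in full; the previously failing tail of it reduces to the sketch below (scripts/rpc.py again abbreviates the workspace path).

    # SPDK refuses PSK files readable by group/other, so lock the key down,
    # then back the subsystem with a namespace and authorize the host:
    chmod 0600 /tmp/tmp.vNuJp2CO3D
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D
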
00:26:17.097 [2024-06-11 13:54:09.885743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.vNuJp2CO3D 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vNuJp2CO3D 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:18.034 [2024-06-11 13:54:10.870454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.034 13:54:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:18.294 13:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:18.553 [2024-06-11 13:54:11.323636] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:18.553 [2024-06-11 13:54:11.323872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.553 13:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:18.812 malloc0 00:26:18.812 13:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:19.072 13:54:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 00:26:19.330 [2024-06-11 13:54:12.002735] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:19.330 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1486663 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1486663 /var/tmp/bdevperf.sock 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1486663 ']' 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:19.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:19.331 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:19.331 [2024-06-11 13:54:12.066984] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:19.331 [2024-06-11 13:54:12.067046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486663 ] 00:26:19.331 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.331 [2024-06-11 13:54:12.144379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.331 [2024-06-11 13:54:12.214355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.590 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:19.590 13:54:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:19.590 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 00:26:19.848 [2024-06-11 13:54:12.519815] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:19.848 [2024-06-11 13:54:12.519895] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:19.848 TLSTESTn1 00:26:19.848 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:20.107 13:54:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:26:20.107 "subsystems": [ 00:26:20.107 { 00:26:20.107 "subsystem": "keyring", 00:26:20.107 "config": [] 00:26:20.107 }, 00:26:20.107 { 00:26:20.107 "subsystem": "iobuf", 00:26:20.107 "config": [ 00:26:20.107 { 00:26:20.107 "method": "iobuf_set_options", 00:26:20.107 "params": { 00:26:20.108 "small_pool_count": 8192, 00:26:20.108 "large_pool_count": 1024, 00:26:20.108 "small_bufsize": 8192, 00:26:20.108 "large_bufsize": 135168 00:26:20.108 } 00:26:20.108 } 00:26:20.108 ] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "sock", 00:26:20.108 "config": [ 00:26:20.108 { 00:26:20.108 "method": "sock_set_default_impl", 00:26:20.108 "params": { 00:26:20.108 "impl_name": "posix" 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "sock_impl_set_options", 00:26:20.108 "params": { 00:26:20.108 "impl_name": "ssl", 00:26:20.108 "recv_buf_size": 4096, 00:26:20.108 "send_buf_size": 4096, 00:26:20.108 "enable_recv_pipe": true, 00:26:20.108 "enable_quickack": false, 00:26:20.108 "enable_placement_id": 0, 00:26:20.108 "enable_zerocopy_send_server": true, 00:26:20.108 "enable_zerocopy_send_client": false, 00:26:20.108 "zerocopy_threshold": 0, 00:26:20.108 "tls_version": 0, 00:26:20.108 "enable_ktls": false 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "sock_impl_set_options", 00:26:20.108 "params": { 00:26:20.108 "impl_name": "posix", 00:26:20.108 "recv_buf_size": 2097152, 00:26:20.108 "send_buf_size": 
2097152, 00:26:20.108 "enable_recv_pipe": true, 00:26:20.108 "enable_quickack": false, 00:26:20.108 "enable_placement_id": 0, 00:26:20.108 "enable_zerocopy_send_server": true, 00:26:20.108 "enable_zerocopy_send_client": false, 00:26:20.108 "zerocopy_threshold": 0, 00:26:20.108 "tls_version": 0, 00:26:20.108 "enable_ktls": false 00:26:20.108 } 00:26:20.108 } 00:26:20.108 ] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "vmd", 00:26:20.108 "config": [] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "accel", 00:26:20.108 "config": [ 00:26:20.108 { 00:26:20.108 "method": "accel_set_options", 00:26:20.108 "params": { 00:26:20.108 "small_cache_size": 128, 00:26:20.108 "large_cache_size": 16, 00:26:20.108 "task_count": 2048, 00:26:20.108 "sequence_count": 2048, 00:26:20.108 "buf_count": 2048 00:26:20.108 } 00:26:20.108 } 00:26:20.108 ] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "bdev", 00:26:20.108 "config": [ 00:26:20.108 { 00:26:20.108 "method": "bdev_set_options", 00:26:20.108 "params": { 00:26:20.108 "bdev_io_pool_size": 65535, 00:26:20.108 "bdev_io_cache_size": 256, 00:26:20.108 "bdev_auto_examine": true, 00:26:20.108 "iobuf_small_cache_size": 128, 00:26:20.108 "iobuf_large_cache_size": 16 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "bdev_raid_set_options", 00:26:20.108 "params": { 00:26:20.108 "process_window_size_kb": 1024 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "bdev_iscsi_set_options", 00:26:20.108 "params": { 00:26:20.108 "timeout_sec": 30 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "bdev_nvme_set_options", 00:26:20.108 "params": { 00:26:20.108 "action_on_timeout": "none", 00:26:20.108 "timeout_us": 0, 00:26:20.108 "timeout_admin_us": 0, 00:26:20.108 "keep_alive_timeout_ms": 10000, 00:26:20.108 "arbitration_burst": 0, 00:26:20.108 "low_priority_weight": 0, 00:26:20.108 "medium_priority_weight": 0, 00:26:20.108 "high_priority_weight": 0, 00:26:20.108 "nvme_adminq_poll_period_us": 10000, 00:26:20.108 "nvme_ioq_poll_period_us": 0, 00:26:20.108 "io_queue_requests": 0, 00:26:20.108 "delay_cmd_submit": true, 00:26:20.108 "transport_retry_count": 4, 00:26:20.108 "bdev_retry_count": 3, 00:26:20.108 "transport_ack_timeout": 0, 00:26:20.108 "ctrlr_loss_timeout_sec": 0, 00:26:20.108 "reconnect_delay_sec": 0, 00:26:20.108 "fast_io_fail_timeout_sec": 0, 00:26:20.108 "disable_auto_failback": false, 00:26:20.108 "generate_uuids": false, 00:26:20.108 "transport_tos": 0, 00:26:20.108 "nvme_error_stat": false, 00:26:20.108 "rdma_srq_size": 0, 00:26:20.108 "io_path_stat": false, 00:26:20.108 "allow_accel_sequence": false, 00:26:20.108 "rdma_max_cq_size": 0, 00:26:20.108 "rdma_cm_event_timeout_ms": 0, 00:26:20.108 "dhchap_digests": [ 00:26:20.108 "sha256", 00:26:20.108 "sha384", 00:26:20.108 "sha512" 00:26:20.108 ], 00:26:20.108 "dhchap_dhgroups": [ 00:26:20.108 "null", 00:26:20.108 "ffdhe2048", 00:26:20.108 "ffdhe3072", 00:26:20.108 "ffdhe4096", 00:26:20.108 "ffdhe6144", 00:26:20.108 "ffdhe8192" 00:26:20.108 ] 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "bdev_nvme_set_hotplug", 00:26:20.108 "params": { 00:26:20.108 "period_us": 100000, 00:26:20.108 "enable": false 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "bdev_malloc_create", 00:26:20.108 "params": { 00:26:20.108 "name": "malloc0", 00:26:20.108 "num_blocks": 8192, 00:26:20.108 "block_size": 4096, 00:26:20.108 "physical_block_size": 4096, 00:26:20.108 "uuid": 
"925c40e2-cee9-4702-be7c-e2626b5a409d", 00:26:20.108 "optimal_io_boundary": 0 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "bdev_wait_for_examine" 00:26:20.108 } 00:26:20.108 ] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "nbd", 00:26:20.108 "config": [] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "scheduler", 00:26:20.108 "config": [ 00:26:20.108 { 00:26:20.108 "method": "framework_set_scheduler", 00:26:20.108 "params": { 00:26:20.108 "name": "static" 00:26:20.108 } 00:26:20.108 } 00:26:20.108 ] 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "subsystem": "nvmf", 00:26:20.108 "config": [ 00:26:20.108 { 00:26:20.108 "method": "nvmf_set_config", 00:26:20.108 "params": { 00:26:20.108 "discovery_filter": "match_any", 00:26:20.108 "admin_cmd_passthru": { 00:26:20.108 "identify_ctrlr": false 00:26:20.108 } 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "nvmf_set_max_subsystems", 00:26:20.108 "params": { 00:26:20.108 "max_subsystems": 1024 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "nvmf_set_crdt", 00:26:20.108 "params": { 00:26:20.108 "crdt1": 0, 00:26:20.108 "crdt2": 0, 00:26:20.108 "crdt3": 0 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "nvmf_create_transport", 00:26:20.108 "params": { 00:26:20.108 "trtype": "TCP", 00:26:20.108 "max_queue_depth": 128, 00:26:20.108 "max_io_qpairs_per_ctrlr": 127, 00:26:20.108 "in_capsule_data_size": 4096, 00:26:20.108 "max_io_size": 131072, 00:26:20.108 "io_unit_size": 131072, 00:26:20.108 "max_aq_depth": 128, 00:26:20.108 "num_shared_buffers": 511, 00:26:20.108 "buf_cache_size": 4294967295, 00:26:20.108 "dif_insert_or_strip": false, 00:26:20.108 "zcopy": false, 00:26:20.108 "c2h_success": false, 00:26:20.108 "sock_priority": 0, 00:26:20.108 "abort_timeout_sec": 1, 00:26:20.108 "ack_timeout": 0, 00:26:20.108 "data_wr_pool_size": 0 00:26:20.108 } 00:26:20.108 }, 00:26:20.108 { 00:26:20.108 "method": "nvmf_create_subsystem", 00:26:20.108 "params": { 00:26:20.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.108 "allow_any_host": false, 00:26:20.108 "serial_number": "SPDK00000000000001", 00:26:20.108 "model_number": "SPDK bdev Controller", 00:26:20.108 "max_namespaces": 10, 00:26:20.108 "min_cntlid": 1, 00:26:20.108 "max_cntlid": 65519, 00:26:20.108 "ana_reporting": false 00:26:20.108 } 00:26:20.109 }, 00:26:20.109 { 00:26:20.109 "method": "nvmf_subsystem_add_host", 00:26:20.109 "params": { 00:26:20.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.109 "host": "nqn.2016-06.io.spdk:host1", 00:26:20.109 "psk": "/tmp/tmp.vNuJp2CO3D" 00:26:20.109 } 00:26:20.109 }, 00:26:20.109 { 00:26:20.109 "method": "nvmf_subsystem_add_ns", 00:26:20.109 "params": { 00:26:20.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.109 "namespace": { 00:26:20.109 "nsid": 1, 00:26:20.109 "bdev_name": "malloc0", 00:26:20.109 "nguid": "925C40E2CEE94702BE7CE2626B5A409D", 00:26:20.109 "uuid": "925c40e2-cee9-4702-be7c-e2626b5a409d", 00:26:20.109 "no_auto_visible": false 00:26:20.109 } 00:26:20.109 } 00:26:20.109 }, 00:26:20.109 { 00:26:20.109 "method": "nvmf_subsystem_add_listener", 00:26:20.109 "params": { 00:26:20.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.109 "listen_address": { 00:26:20.109 "trtype": "TCP", 00:26:20.109 "adrfam": "IPv4", 00:26:20.109 "traddr": "10.0.0.2", 00:26:20.109 "trsvcid": "4420" 00:26:20.109 }, 00:26:20.109 "secure_channel": true 00:26:20.109 } 00:26:20.109 } 00:26:20.109 ] 00:26:20.109 } 00:26:20.109 ] 00:26:20.109 }' 00:26:20.109 13:54:12 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:20.368 13:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:26:20.368 "subsystems": [ 00:26:20.368 { 00:26:20.368 "subsystem": "keyring", 00:26:20.368 "config": [] 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "subsystem": "iobuf", 00:26:20.368 "config": [ 00:26:20.368 { 00:26:20.368 "method": "iobuf_set_options", 00:26:20.368 "params": { 00:26:20.368 "small_pool_count": 8192, 00:26:20.368 "large_pool_count": 1024, 00:26:20.368 "small_bufsize": 8192, 00:26:20.368 "large_bufsize": 135168 00:26:20.368 } 00:26:20.368 } 00:26:20.368 ] 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "subsystem": "sock", 00:26:20.368 "config": [ 00:26:20.368 { 00:26:20.368 "method": "sock_set_default_impl", 00:26:20.368 "params": { 00:26:20.368 "impl_name": "posix" 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "sock_impl_set_options", 00:26:20.368 "params": { 00:26:20.368 "impl_name": "ssl", 00:26:20.368 "recv_buf_size": 4096, 00:26:20.368 "send_buf_size": 4096, 00:26:20.368 "enable_recv_pipe": true, 00:26:20.368 "enable_quickack": false, 00:26:20.368 "enable_placement_id": 0, 00:26:20.368 "enable_zerocopy_send_server": true, 00:26:20.368 "enable_zerocopy_send_client": false, 00:26:20.368 "zerocopy_threshold": 0, 00:26:20.368 "tls_version": 0, 00:26:20.368 "enable_ktls": false 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "sock_impl_set_options", 00:26:20.368 "params": { 00:26:20.368 "impl_name": "posix", 00:26:20.368 "recv_buf_size": 2097152, 00:26:20.368 "send_buf_size": 2097152, 00:26:20.368 "enable_recv_pipe": true, 00:26:20.368 "enable_quickack": false, 00:26:20.368 "enable_placement_id": 0, 00:26:20.368 "enable_zerocopy_send_server": true, 00:26:20.368 "enable_zerocopy_send_client": false, 00:26:20.368 "zerocopy_threshold": 0, 00:26:20.368 "tls_version": 0, 00:26:20.368 "enable_ktls": false 00:26:20.368 } 00:26:20.368 } 00:26:20.368 ] 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "subsystem": "vmd", 00:26:20.368 "config": [] 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "subsystem": "accel", 00:26:20.368 "config": [ 00:26:20.368 { 00:26:20.368 "method": "accel_set_options", 00:26:20.368 "params": { 00:26:20.368 "small_cache_size": 128, 00:26:20.368 "large_cache_size": 16, 00:26:20.368 "task_count": 2048, 00:26:20.368 "sequence_count": 2048, 00:26:20.368 "buf_count": 2048 00:26:20.368 } 00:26:20.368 } 00:26:20.368 ] 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "subsystem": "bdev", 00:26:20.368 "config": [ 00:26:20.368 { 00:26:20.368 "method": "bdev_set_options", 00:26:20.368 "params": { 00:26:20.368 "bdev_io_pool_size": 65535, 00:26:20.368 "bdev_io_cache_size": 256, 00:26:20.368 "bdev_auto_examine": true, 00:26:20.368 "iobuf_small_cache_size": 128, 00:26:20.368 "iobuf_large_cache_size": 16 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "bdev_raid_set_options", 00:26:20.368 "params": { 00:26:20.368 "process_window_size_kb": 1024 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "bdev_iscsi_set_options", 00:26:20.368 "params": { 00:26:20.368 "timeout_sec": 30 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "bdev_nvme_set_options", 00:26:20.368 "params": { 00:26:20.368 "action_on_timeout": "none", 00:26:20.368 "timeout_us": 0, 00:26:20.368 "timeout_admin_us": 0, 00:26:20.368 "keep_alive_timeout_ms": 10000, 00:26:20.368 "arbitration_burst": 0, 
00:26:20.368 "low_priority_weight": 0, 00:26:20.368 "medium_priority_weight": 0, 00:26:20.368 "high_priority_weight": 0, 00:26:20.368 "nvme_adminq_poll_period_us": 10000, 00:26:20.368 "nvme_ioq_poll_period_us": 0, 00:26:20.368 "io_queue_requests": 512, 00:26:20.368 "delay_cmd_submit": true, 00:26:20.368 "transport_retry_count": 4, 00:26:20.368 "bdev_retry_count": 3, 00:26:20.368 "transport_ack_timeout": 0, 00:26:20.368 "ctrlr_loss_timeout_sec": 0, 00:26:20.368 "reconnect_delay_sec": 0, 00:26:20.368 "fast_io_fail_timeout_sec": 0, 00:26:20.368 "disable_auto_failback": false, 00:26:20.368 "generate_uuids": false, 00:26:20.368 "transport_tos": 0, 00:26:20.368 "nvme_error_stat": false, 00:26:20.368 "rdma_srq_size": 0, 00:26:20.368 "io_path_stat": false, 00:26:20.368 "allow_accel_sequence": false, 00:26:20.368 "rdma_max_cq_size": 0, 00:26:20.368 "rdma_cm_event_timeout_ms": 0, 00:26:20.368 "dhchap_digests": [ 00:26:20.368 "sha256", 00:26:20.368 "sha384", 00:26:20.368 "sha512" 00:26:20.368 ], 00:26:20.368 "dhchap_dhgroups": [ 00:26:20.368 "null", 00:26:20.368 "ffdhe2048", 00:26:20.368 "ffdhe3072", 00:26:20.368 "ffdhe4096", 00:26:20.368 "ffdhe6144", 00:26:20.368 "ffdhe8192" 00:26:20.368 ] 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "bdev_nvme_attach_controller", 00:26:20.368 "params": { 00:26:20.368 "name": "TLSTEST", 00:26:20.368 "trtype": "TCP", 00:26:20.368 "adrfam": "IPv4", 00:26:20.368 "traddr": "10.0.0.2", 00:26:20.368 "trsvcid": "4420", 00:26:20.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.368 "prchk_reftag": false, 00:26:20.368 "prchk_guard": false, 00:26:20.368 "ctrlr_loss_timeout_sec": 0, 00:26:20.368 "reconnect_delay_sec": 0, 00:26:20.368 "fast_io_fail_timeout_sec": 0, 00:26:20.368 "psk": "/tmp/tmp.vNuJp2CO3D", 00:26:20.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:20.368 "hdgst": false, 00:26:20.368 "ddgst": false 00:26:20.368 } 00:26:20.368 }, 00:26:20.368 { 00:26:20.368 "method": "bdev_nvme_set_hotplug", 00:26:20.368 "params": { 00:26:20.369 "period_us": 100000, 00:26:20.369 "enable": false 00:26:20.369 } 00:26:20.369 }, 00:26:20.369 { 00:26:20.369 "method": "bdev_wait_for_examine" 00:26:20.369 } 00:26:20.369 ] 00:26:20.369 }, 00:26:20.369 { 00:26:20.369 "subsystem": "nbd", 00:26:20.369 "config": [] 00:26:20.369 } 00:26:20.369 ] 00:26:20.369 }' 00:26:20.369 13:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1486663 00:26:20.369 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1486663 ']' 00:26:20.369 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1486663 00:26:20.369 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:20.369 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:20.369 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1486663 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1486663' 00:26:20.628 killing process with pid 1486663 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1486663 00:26:20.628 Received shutdown signal, test time was about 10.000000 seconds 00:26:20.628 00:26:20.628 Latency(us) 00:26:20.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:26:20.628 =================================================================================================================== 00:26:20.628 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:20.628 [2024-06-11 13:54:13.295005] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1486663 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1486327 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1486327 ']' 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1486327 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1486327 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1486327' 00:26:20.628 killing process with pid 1486327 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1486327 00:26:20.628 [2024-06-11 13:54:13.532784] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:20.628 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1486327 00:26:20.889 13:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:20.889 13:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:20.889 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:20.889 13:54:13 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:26:20.889 "subsystems": [ 00:26:20.889 { 00:26:20.889 "subsystem": "keyring", 00:26:20.889 "config": [] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "iobuf", 00:26:20.889 "config": [ 00:26:20.889 { 00:26:20.889 "method": "iobuf_set_options", 00:26:20.889 "params": { 00:26:20.889 "small_pool_count": 8192, 00:26:20.889 "large_pool_count": 1024, 00:26:20.889 "small_bufsize": 8192, 00:26:20.889 "large_bufsize": 135168 00:26:20.889 } 00:26:20.889 } 00:26:20.889 ] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "sock", 00:26:20.889 "config": [ 00:26:20.889 { 00:26:20.889 "method": "sock_set_default_impl", 00:26:20.889 "params": { 00:26:20.889 "impl_name": "posix" 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "sock_impl_set_options", 00:26:20.889 "params": { 00:26:20.889 "impl_name": "ssl", 00:26:20.889 "recv_buf_size": 4096, 00:26:20.889 "send_buf_size": 4096, 00:26:20.889 "enable_recv_pipe": true, 00:26:20.889 "enable_quickack": false, 00:26:20.889 "enable_placement_id": 0, 00:26:20.889 "enable_zerocopy_send_server": true, 00:26:20.889 "enable_zerocopy_send_client": false, 00:26:20.889 "zerocopy_threshold": 0, 00:26:20.889 "tls_version": 0, 00:26:20.889 "enable_ktls": false 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "sock_impl_set_options", 00:26:20.889 "params": { 00:26:20.889 "impl_name": "posix", 00:26:20.889 
"recv_buf_size": 2097152, 00:26:20.889 "send_buf_size": 2097152, 00:26:20.889 "enable_recv_pipe": true, 00:26:20.889 "enable_quickack": false, 00:26:20.889 "enable_placement_id": 0, 00:26:20.889 "enable_zerocopy_send_server": true, 00:26:20.889 "enable_zerocopy_send_client": false, 00:26:20.889 "zerocopy_threshold": 0, 00:26:20.889 "tls_version": 0, 00:26:20.889 "enable_ktls": false 00:26:20.889 } 00:26:20.889 } 00:26:20.889 ] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "vmd", 00:26:20.889 "config": [] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "accel", 00:26:20.889 "config": [ 00:26:20.889 { 00:26:20.889 "method": "accel_set_options", 00:26:20.889 "params": { 00:26:20.889 "small_cache_size": 128, 00:26:20.889 "large_cache_size": 16, 00:26:20.889 "task_count": 2048, 00:26:20.889 "sequence_count": 2048, 00:26:20.889 "buf_count": 2048 00:26:20.889 } 00:26:20.889 } 00:26:20.889 ] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "bdev", 00:26:20.889 "config": [ 00:26:20.889 { 00:26:20.889 "method": "bdev_set_options", 00:26:20.889 "params": { 00:26:20.889 "bdev_io_pool_size": 65535, 00:26:20.889 "bdev_io_cache_size": 256, 00:26:20.889 "bdev_auto_examine": true, 00:26:20.889 "iobuf_small_cache_size": 128, 00:26:20.889 "iobuf_large_cache_size": 16 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "bdev_raid_set_options", 00:26:20.889 "params": { 00:26:20.889 "process_window_size_kb": 1024 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "bdev_iscsi_set_options", 00:26:20.889 "params": { 00:26:20.889 "timeout_sec": 30 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "bdev_nvme_set_options", 00:26:20.889 "params": { 00:26:20.889 "action_on_timeout": "none", 00:26:20.889 "timeout_us": 0, 00:26:20.889 "timeout_admin_us": 0, 00:26:20.889 "keep_alive_timeout_ms": 10000, 00:26:20.889 "arbitration_burst": 0, 00:26:20.889 "low_priority_weight": 0, 00:26:20.889 "medium_priority_weight": 0, 00:26:20.889 "high_priority_weight": 0, 00:26:20.889 "nvme_adminq_poll_period_us": 10000, 00:26:20.889 "nvme_ioq_poll_period_us": 0, 00:26:20.889 "io_queue_requests": 0, 00:26:20.889 "delay_cmd_submit": true, 00:26:20.889 "transport_retry_count": 4, 00:26:20.889 "bdev_retry_count": 3, 00:26:20.889 "transport_ack_timeout": 0, 00:26:20.889 "ctrlr_loss_timeout_sec": 0, 00:26:20.889 "reconnect_delay_sec": 0, 00:26:20.889 "fast_io_fail_timeout_sec": 0, 00:26:20.889 "disable_auto_failback": false, 00:26:20.889 "generate_uuids": false, 00:26:20.889 "transport_tos": 0, 00:26:20.889 "nvme_error_stat": false, 00:26:20.889 "rdma_srq_size": 0, 00:26:20.889 "io_path_stat": false, 00:26:20.889 "allow_accel_sequence": false, 00:26:20.889 "rdma_max_cq_size": 0, 00:26:20.889 "rdma_cm_event_timeout_ms": 0, 00:26:20.889 "dhchap_digests": [ 00:26:20.889 "sha256", 00:26:20.889 "sha384", 00:26:20.889 "sha512" 00:26:20.889 ], 00:26:20.889 "dhchap_dhgroups": [ 00:26:20.889 "null", 00:26:20.889 "ffdhe2048", 00:26:20.889 "ffdhe3072", 00:26:20.889 "ffdhe4096", 00:26:20.889 "ffdhe6144", 00:26:20.889 "ffdhe8192" 00:26:20.889 ] 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "bdev_nvme_set_hotplug", 00:26:20.889 "params": { 00:26:20.889 "period_us": 100000, 00:26:20.889 "enable": false 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "bdev_malloc_create", 00:26:20.889 "params": { 00:26:20.889 "name": "malloc0", 00:26:20.889 "num_blocks": 8192, 00:26:20.889 "block_size": 4096, 00:26:20.889 "physical_block_size": 4096, 
00:26:20.889 "uuid": "925c40e2-cee9-4702-be7c-e2626b5a409d", 00:26:20.889 "optimal_io_boundary": 0 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "bdev_wait_for_examine" 00:26:20.889 } 00:26:20.889 ] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "nbd", 00:26:20.889 "config": [] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "scheduler", 00:26:20.889 "config": [ 00:26:20.889 { 00:26:20.889 "method": "framework_set_scheduler", 00:26:20.889 "params": { 00:26:20.889 "name": "static" 00:26:20.889 } 00:26:20.889 } 00:26:20.889 ] 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "subsystem": "nvmf", 00:26:20.889 "config": [ 00:26:20.889 { 00:26:20.889 "method": "nvmf_set_config", 00:26:20.889 "params": { 00:26:20.889 "discovery_filter": "match_any", 00:26:20.889 "admin_cmd_passthru": { 00:26:20.889 "identify_ctrlr": false 00:26:20.889 } 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "nvmf_set_max_subsystems", 00:26:20.889 "params": { 00:26:20.889 "max_subsystems": 1024 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "nvmf_set_crdt", 00:26:20.889 "params": { 00:26:20.889 "crdt1": 0, 00:26:20.889 "crdt2": 0, 00:26:20.889 "crdt3": 0 00:26:20.889 } 00:26:20.889 }, 00:26:20.889 { 00:26:20.889 "method": "nvmf_create_transport", 00:26:20.889 "params": { 00:26:20.889 "trtype": "TCP", 00:26:20.889 "max_queue_depth": 128, 00:26:20.890 "max_io_qpairs_per_ctrlr": 127, 00:26:20.890 "in_capsule_data_size": 4096, 00:26:20.890 "max_io_size": 131072, 00:26:20.890 "io_unit_size": 131072, 00:26:20.890 "max_aq_depth": 128, 00:26:20.890 "num_shared_buffers": 511, 00:26:20.890 "buf_cache_size": 4294967295, 00:26:20.890 "dif_insert_or_strip": false, 00:26:20.890 "zcopy": false, 00:26:20.890 "c2h_success": false, 00:26:20.890 "sock_priority": 0, 00:26:20.890 "abort_timeout_sec": 1, 00:26:20.890 "ack_timeout": 0, 00:26:20.890 "data_wr_pool_size": 0 00:26:20.890 } 00:26:20.890 }, 00:26:20.890 { 00:26:20.890 "method": "nvmf_create_subsystem", 00:26:20.890 "params": { 00:26:20.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.890 "allow_any_host": false, 00:26:20.890 "serial_number": "SPDK00000000000001", 00:26:20.890 "model_number": "SPDK bdev Controller", 00:26:20.890 "max_namespaces": 10, 00:26:20.890 "min_cntlid": 1, 00:26:20.890 "max_cntlid": 65519, 00:26:20.890 "ana_reporting": false 00:26:20.890 } 00:26:20.890 }, 00:26:20.890 { 00:26:20.890 "method": "nvmf_subsystem_add_host", 00:26:20.890 "params": { 00:26:20.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.890 "host": "nqn.2016-06.io.spdk:host1", 00:26:20.890 "psk": "/tmp/tmp.vNuJp2CO3D" 00:26:20.890 } 00:26:20.890 }, 00:26:20.890 { 00:26:20.890 "method": "nvmf_subsystem_add_ns", 00:26:20.890 "params": { 00:26:20.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.890 "namespace": { 00:26:20.890 "nsid": 1, 00:26:20.890 "bdev_name": "malloc0", 00:26:20.890 "nguid": "925C40E2CEE94702BE7CE2626B5A409D", 00:26:20.890 "uuid": "925c40e2-cee9-4702-be7c-e2626b5a409d", 00:26:20.890 "no_auto_visible": false 00:26:20.890 } 00:26:20.890 } 00:26:20.890 }, 00:26:20.890 { 00:26:20.890 "method": "nvmf_subsystem_add_listener", 00:26:20.890 "params": { 00:26:20.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.890 "listen_address": { 00:26:20.890 "trtype": "TCP", 00:26:20.890 "adrfam": "IPv4", 00:26:20.890 "traddr": "10.0.0.2", 00:26:20.890 "trsvcid": "4420" 00:26:20.890 }, 00:26:20.890 "secure_channel": true 00:26:20.890 } 00:26:20.890 } 00:26:20.890 ] 00:26:20.890 } 00:26:20.890 ] 00:26:20.890 }' 
00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1486963 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1486963 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1486963 ']' 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:20.890 13:54:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:21.150 [2024-06-11 13:54:13.801886] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:21.150 [2024-06-11 13:54:13.801948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.150 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.150 [2024-06-11 13:54:13.898197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.150 [2024-06-11 13:54:13.982844] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.150 [2024-06-11 13:54:13.982888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.150 [2024-06-11 13:54:13.982901] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.150 [2024-06-11 13:54:13.982913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.150 [2024-06-11 13:54:13.982923] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
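
The -c /dev/fd/62 on the nvmf_tgt invocation above is the point of this phase: rather than rebuilding the target RPC by RPC, tls.sh restarts it straight from the JSON that save_config emitted, handed over as a file descriptor. In bash that descriptor is most naturally produced by process substitution; a sketch, with $tgtconf as an assumed variable name for the captured dump:

    # Capture the live configuration, then boot a fresh target from it.
    tgtconf=$(scripts/rpc.py save_config)
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
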
00:26:21.150 [2024-06-11 13:54:13.983000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.409 [2024-06-11 13:54:14.191195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.409 [2024-06-11 13:54:14.207141] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:21.409 [2024-06-11 13:54:14.223196] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:21.409 [2024-06-11 13:54:14.232840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1487242 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1487242 /var/tmp/bdevperf.sock 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1487242 ']' 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:21.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
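
The initiator side gets the same treatment: this bdevperf instance takes -c /dev/fd/63, so the bdev_nvme_attach_controller call, complete with its "psk" parameter (visible in the JSON echoed next), runs from config during startup instead of being sent over RPC afterwards. A sketch, with $bdevperfconf as an assumed variable name:

    # Relaunch bdevperf from the config saved off its own RPC socket;
    # the TLSTEST controller is created as part of startup.
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
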
00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:21.978 13:54:14 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:26:21.978 "subsystems": [ 00:26:21.978 { 00:26:21.978 "subsystem": "keyring", 00:26:21.978 "config": [] 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "subsystem": "iobuf", 00:26:21.978 "config": [ 00:26:21.978 { 00:26:21.978 "method": "iobuf_set_options", 00:26:21.978 "params": { 00:26:21.978 "small_pool_count": 8192, 00:26:21.978 "large_pool_count": 1024, 00:26:21.978 "small_bufsize": 8192, 00:26:21.978 "large_bufsize": 135168 00:26:21.978 } 00:26:21.978 } 00:26:21.978 ] 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "subsystem": "sock", 00:26:21.978 "config": [ 00:26:21.978 { 00:26:21.978 "method": "sock_set_default_impl", 00:26:21.978 "params": { 00:26:21.978 "impl_name": "posix" 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "sock_impl_set_options", 00:26:21.978 "params": { 00:26:21.978 "impl_name": "ssl", 00:26:21.978 "recv_buf_size": 4096, 00:26:21.978 "send_buf_size": 4096, 00:26:21.978 "enable_recv_pipe": true, 00:26:21.978 "enable_quickack": false, 00:26:21.978 "enable_placement_id": 0, 00:26:21.978 "enable_zerocopy_send_server": true, 00:26:21.978 "enable_zerocopy_send_client": false, 00:26:21.978 "zerocopy_threshold": 0, 00:26:21.978 "tls_version": 0, 00:26:21.978 "enable_ktls": false 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "sock_impl_set_options", 00:26:21.978 "params": { 00:26:21.978 "impl_name": "posix", 00:26:21.978 "recv_buf_size": 2097152, 00:26:21.978 "send_buf_size": 2097152, 00:26:21.978 "enable_recv_pipe": true, 00:26:21.978 "enable_quickack": false, 00:26:21.978 "enable_placement_id": 0, 00:26:21.978 "enable_zerocopy_send_server": true, 00:26:21.978 "enable_zerocopy_send_client": false, 00:26:21.978 "zerocopy_threshold": 0, 00:26:21.978 "tls_version": 0, 00:26:21.978 "enable_ktls": false 00:26:21.978 } 00:26:21.978 } 00:26:21.978 ] 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "subsystem": "vmd", 00:26:21.978 "config": [] 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "subsystem": "accel", 00:26:21.978 "config": [ 00:26:21.978 { 00:26:21.978 "method": "accel_set_options", 00:26:21.978 "params": { 00:26:21.978 "small_cache_size": 128, 00:26:21.978 "large_cache_size": 16, 00:26:21.978 "task_count": 2048, 00:26:21.978 "sequence_count": 2048, 00:26:21.978 "buf_count": 2048 00:26:21.978 } 00:26:21.978 } 00:26:21.978 ] 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "subsystem": "bdev", 00:26:21.978 "config": [ 00:26:21.978 { 00:26:21.978 "method": "bdev_set_options", 00:26:21.978 "params": { 00:26:21.978 "bdev_io_pool_size": 65535, 00:26:21.978 "bdev_io_cache_size": 256, 00:26:21.978 "bdev_auto_examine": true, 00:26:21.978 "iobuf_small_cache_size": 128, 00:26:21.978 "iobuf_large_cache_size": 16 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "bdev_raid_set_options", 00:26:21.978 "params": { 00:26:21.978 "process_window_size_kb": 1024 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "bdev_iscsi_set_options", 00:26:21.978 "params": { 00:26:21.978 "timeout_sec": 30 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "bdev_nvme_set_options", 00:26:21.978 "params": { 00:26:21.978 "action_on_timeout": "none", 00:26:21.978 "timeout_us": 0, 00:26:21.978 "timeout_admin_us": 0, 00:26:21.978 "keep_alive_timeout_ms": 10000, 00:26:21.978 "arbitration_burst": 0, 00:26:21.978 "low_priority_weight": 0, 00:26:21.978 
"medium_priority_weight": 0, 00:26:21.978 "high_priority_weight": 0, 00:26:21.978 "nvme_adminq_poll_period_us": 10000, 00:26:21.978 "nvme_ioq_poll_period_us": 0, 00:26:21.978 "io_queue_requests": 512, 00:26:21.978 "delay_cmd_submit": true, 00:26:21.978 "transport_retry_count": 4, 00:26:21.978 "bdev_retry_count": 3, 00:26:21.978 "transport_ack_timeout": 0, 00:26:21.978 "ctrlr_loss_timeout_sec": 0, 00:26:21.978 "reconnect_delay_sec": 0, 00:26:21.978 "fast_io_fail_timeout_sec": 0, 00:26:21.978 "disable_auto_failback": false, 00:26:21.978 "generate_uuids": false, 00:26:21.978 "transport_tos": 0, 00:26:21.978 "nvme_error_stat": false, 00:26:21.978 "rdma_srq_size": 0, 00:26:21.978 "io_path_stat": false, 00:26:21.978 "allow_accel_sequence": false, 00:26:21.978 "rdma_max_cq_size": 0, 00:26:21.978 "rdma_cm_event_timeout_ms": 0, 00:26:21.978 "dhchap_digests": [ 00:26:21.978 "sha256", 00:26:21.978 "sha384", 00:26:21.978 "sha512" 00:26:21.978 ], 00:26:21.978 "dhchap_dhgroups": [ 00:26:21.978 "null", 00:26:21.978 "ffdhe2048", 00:26:21.978 "ffdhe3072", 00:26:21.978 "ffdhe4096", 00:26:21.978 "ffdhe6144", 00:26:21.978 "ffdhe8192" 00:26:21.978 ] 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "bdev_nvme_attach_controller", 00:26:21.978 "params": { 00:26:21.978 "name": "TLSTEST", 00:26:21.978 "trtype": "TCP", 00:26:21.978 "adrfam": "IPv4", 00:26:21.978 "traddr": "10.0.0.2", 00:26:21.978 "trsvcid": "4420", 00:26:21.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.978 "prchk_reftag": false, 00:26:21.978 "prchk_guard": false, 00:26:21.978 "ctrlr_loss_timeout_sec": 0, 00:26:21.978 "reconnect_delay_sec": 0, 00:26:21.978 "fast_io_fail_timeout_sec": 0, 00:26:21.978 "psk": "/tmp/tmp.vNuJp2CO3D", 00:26:21.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.978 "hdgst": false, 00:26:21.978 "ddgst": false 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "bdev_nvme_set_hotplug", 00:26:21.978 "params": { 00:26:21.978 "period_us": 100000, 00:26:21.978 "enable": false 00:26:21.978 } 00:26:21.978 }, 00:26:21.978 { 00:26:21.978 "method": "bdev_wait_for_examine" 00:26:21.978 } 00:26:21.978 ] 00:26:21.979 }, 00:26:21.979 { 00:26:21.979 "subsystem": "nbd", 00:26:21.979 "config": [] 00:26:21.979 } 00:26:21.979 ] 00:26:21.979 }' 00:26:21.979 13:54:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:21.979 [2024-06-11 13:54:14.798323] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:26:21.979 [2024-06-11 13:54:14.798397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487242 ]
00:26:21.979 EAL: No free 2048 kB hugepages reported on node 1
00:26:21.979 [2024-06-11 13:54:14.876286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.238 [2024-06-11 13:54:14.946675] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:26:22.238 [2024-06-11 13:54:15.089437] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:26:22.238 [2024-06-11 13:54:15.089525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:26:22.807 13:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:26:22.807 13:54:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0
00:26:22.807 13:54:15 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:26:23.066 Running I/O for 10 seconds...
00:26:33.049
00:26:33.049 Latency(us)
00:26:33.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:33.049 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:33.049 Verification LBA range: start 0x0 length 0x2000
00:26:33.049 TLSTESTn1 : 10.03 4431.10 17.31 0.00 0.00 28834.25 6763.32 50121.93
00:26:33.049 ===================================================================================================================
00:26:33.049 Total : 4431.10 17.31 0.00 0.00 28834.25 6763.32 50121.93
00:26:33.049 0
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1487242
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1487242 ']'
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1487242
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1487242
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1487242'
00:26:33.049 killing process with pid 1487242
00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1487242
00:26:33.049 Received shutdown signal, test time was about 10.000000 seconds
00:26:33.049
00:26:33.049 Latency(us)
00:26:33.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:33.049 ===================================================================================================================
00:26:33.049 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:33.049 [2024-06-11 13:54:25.952083] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for
removal in v24.09 hit 1 times 00:26:33.049 13:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1487242 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1486963 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1486963 ']' 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1486963 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1486963 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1486963' 00:26:33.309 killing process with pid 1486963 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1486963 00:26:33.309 [2024-06-11 13:54:26.191544] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:33.309 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1486963 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1489114 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1489114 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1489114 ']' 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:33.568 13:54:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:33.568 [2024-06-11 13:54:26.460443] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:33.568 [2024-06-11 13:54:26.460511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.828 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.828 [2024-06-11 13:54:26.567231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.828 [2024-06-11 13:54:26.654979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:33.828 [2024-06-11 13:54:26.655018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.828 [2024-06-11 13:54:26.655031] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.828 [2024-06-11 13:54:26.655043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.828 [2024-06-11 13:54:26.655053] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.828 [2024-06-11 13:54:26.655086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.vNuJp2CO3D 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vNuJp2CO3D 00:26:34.823 13:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:34.824 [2024-06-11 13:54:27.620073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.824 13:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:35.098 13:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:35.356 [2024-06-11 13:54:28.069249] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:35.356 [2024-06-11 13:54:28.069498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.356 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:35.615 malloc0 00:26:35.615 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vNuJp2CO3D 00:26:35.874 [2024-06-11 13:54:28.736259] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1489453 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1489453 /var/tmp/bdevperf.sock 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1489453 ']' 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:35.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:35.874 13:54:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:36.132 [2024-06-11 13:54:28.804266] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:36.132 [2024-06-11 13:54:28.804330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489453 ] 00:26:36.132 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.132 [2024-06-11 13:54:28.896002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.132 [2024-06-11 13:54:28.977774] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.065 13:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:37.065 13:54:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:37.065 13:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vNuJp2CO3D 00:26:37.065 13:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:37.323 [2024-06-11 13:54:30.134880] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:37.323 nvme0n1 00:26:37.323 13:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:37.581 Running I/O for 1 seconds... 
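Both halves of the TLS wiring under test are visible above. On the target side the listener is created with -k (TLS) and the host entry pins a PSK file; on the initiator side the same file is registered as a keyring entry and referenced by name at attach time. Condensed from the rpc.py calls in the log (paths and NQNs exactly as shown there):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$rootdir/scripts/rpc.py
    key=/tmp/tmp.vNuJp2CO3D              # PSK file created earlier by the script

    # target side: TCP transport, subsystem, TLS listener, malloc-backed ns
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key

    # initiator side (bdevperf): load the key, then attach with --psk by name
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $key
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Note the deprecation warnings in the log: this revision still accepts a raw PSK path on the target side (nvmf_tcp_psk_path) and via spdk_nvme_ctrlr_opts.psk, both flagged for removal in v24.09; the initiator attach above already uses the newer keyring flow.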
00:26:38.514 00:26:38.514 Latency(us) 00:26:38.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.514 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:38.514 Verification LBA range: start 0x0 length 0x2000 00:26:38.514 nvme0n1 : 1.02 4008.61 15.66 0.00 0.00 31576.52 6422.53 65850.57 00:26:38.514 =================================================================================================================== 00:26:38.514 Total : 4008.61 15.66 0.00 0.00 31576.52 6422.53 65850.57 00:26:38.514 0 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1489453 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1489453 ']' 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1489453 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1489453 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1489453' 00:26:38.514 killing process with pid 1489453 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1489453 00:26:38.514 Received shutdown signal, test time was about 1.000000 seconds 00:26:38.514 00:26:38.514 Latency(us) 00:26:38.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.514 =================================================================================================================== 00:26:38.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.514 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1489453 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1489114 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1489114 ']' 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1489114 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1489114 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1489114' 00:26:38.772 killing process with pid 1489114 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1489114 00:26:38.772 [2024-06-11 13:54:31.676619] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:38.772 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1489114 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.030 
13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1489967 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1489967 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1489967 ']' 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:39.030 13:54:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:39.289 [2024-06-11 13:54:31.945987] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:39.289 [2024-06-11 13:54:31.946056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.289 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.289 [2024-06-11 13:54:32.056371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.289 [2024-06-11 13:54:32.135573] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.289 [2024-06-11 13:54:32.135622] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.289 [2024-06-11 13:54:32.135635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.289 [2024-06-11 13:54:32.135647] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.289 [2024-06-11 13:54:32.135657] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
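Each target (re)start above goes through the same two helpers: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, presumably so the physical e810 test ports can face each other on one host, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough stand-in for that pair, where the polling loop is an assumption about the helper's behaviour rather than its exact code:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start the target in its own netns; -e 0xFFFF enables all tracepoint groups
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # poll the RPC socket until the application is up and serving requests
    until $rootdir/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
            sleep 0.5
    done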
00:26:39.289 [2024-06-11 13:54:32.135693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:40.222 [2024-06-11 13:54:32.904288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.222 malloc0 00:26:40.222 [2024-06-11 13:54:32.933512] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:40.222 [2024-06-11 13:54:32.933750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1490238 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1490238 /var/tmp/bdevperf.sock 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1490238 ']' 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:40.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:40.222 13:54:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:40.222 [2024-06-11 13:54:33.009786] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:26:40.222 [2024-06-11 13:54:33.009844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490238 ] 00:26:40.222 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.222 [2024-06-11 13:54:33.102491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.480 [2024-06-11 13:54:33.188311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.045 13:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:41.045 13:54:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:41.045 13:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vNuJp2CO3D 00:26:41.303 13:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:41.561 [2024-06-11 13:54:34.352362] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:41.561 nvme0n1 00:26:41.561 13:54:34 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:41.819 Running I/O for 1 seconds... 00:26:42.753 00:26:42.753 Latency(us) 00:26:42.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.753 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:42.753 Verification LBA range: start 0x0 length 0x2000 00:26:42.753 nvme0n1 : 1.04 3486.67 13.62 0.00 0.00 36076.93 7340.03 58720.26 00:26:42.753 =================================================================================================================== 00:26:42.753 Total : 3486.67 13.62 0.00 0.00 36076.93 7340.03 58720.26 00:26:42.753 0 00:26:42.753 13:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:26:42.753 13:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.753 13:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:43.011 13:54:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.011 13:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:26:43.011 "subsystems": [ 00:26:43.011 { 00:26:43.011 "subsystem": "keyring", 00:26:43.011 "config": [ 00:26:43.011 { 00:26:43.011 "method": "keyring_file_add_key", 00:26:43.012 "params": { 00:26:43.012 "name": "key0", 00:26:43.012 "path": "/tmp/tmp.vNuJp2CO3D" 00:26:43.012 } 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "iobuf", 00:26:43.012 "config": [ 00:26:43.012 { 00:26:43.012 "method": "iobuf_set_options", 00:26:43.012 "params": { 00:26:43.012 "small_pool_count": 8192, 00:26:43.012 "large_pool_count": 1024, 00:26:43.012 "small_bufsize": 8192, 00:26:43.012 "large_bufsize": 135168 00:26:43.012 } 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "sock", 00:26:43.012 "config": [ 00:26:43.012 { 00:26:43.012 "method": "sock_set_default_impl", 00:26:43.012 "params": { 00:26:43.012 "impl_name": "posix" 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 
{ 00:26:43.012 "method": "sock_impl_set_options", 00:26:43.012 "params": { 00:26:43.012 "impl_name": "ssl", 00:26:43.012 "recv_buf_size": 4096, 00:26:43.012 "send_buf_size": 4096, 00:26:43.012 "enable_recv_pipe": true, 00:26:43.012 "enable_quickack": false, 00:26:43.012 "enable_placement_id": 0, 00:26:43.012 "enable_zerocopy_send_server": true, 00:26:43.012 "enable_zerocopy_send_client": false, 00:26:43.012 "zerocopy_threshold": 0, 00:26:43.012 "tls_version": 0, 00:26:43.012 "enable_ktls": false 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "sock_impl_set_options", 00:26:43.012 "params": { 00:26:43.012 "impl_name": "posix", 00:26:43.012 "recv_buf_size": 2097152, 00:26:43.012 "send_buf_size": 2097152, 00:26:43.012 "enable_recv_pipe": true, 00:26:43.012 "enable_quickack": false, 00:26:43.012 "enable_placement_id": 0, 00:26:43.012 "enable_zerocopy_send_server": true, 00:26:43.012 "enable_zerocopy_send_client": false, 00:26:43.012 "zerocopy_threshold": 0, 00:26:43.012 "tls_version": 0, 00:26:43.012 "enable_ktls": false 00:26:43.012 } 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "vmd", 00:26:43.012 "config": [] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "accel", 00:26:43.012 "config": [ 00:26:43.012 { 00:26:43.012 "method": "accel_set_options", 00:26:43.012 "params": { 00:26:43.012 "small_cache_size": 128, 00:26:43.012 "large_cache_size": 16, 00:26:43.012 "task_count": 2048, 00:26:43.012 "sequence_count": 2048, 00:26:43.012 "buf_count": 2048 00:26:43.012 } 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "bdev", 00:26:43.012 "config": [ 00:26:43.012 { 00:26:43.012 "method": "bdev_set_options", 00:26:43.012 "params": { 00:26:43.012 "bdev_io_pool_size": 65535, 00:26:43.012 "bdev_io_cache_size": 256, 00:26:43.012 "bdev_auto_examine": true, 00:26:43.012 "iobuf_small_cache_size": 128, 00:26:43.012 "iobuf_large_cache_size": 16 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "bdev_raid_set_options", 00:26:43.012 "params": { 00:26:43.012 "process_window_size_kb": 1024 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "bdev_iscsi_set_options", 00:26:43.012 "params": { 00:26:43.012 "timeout_sec": 30 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "bdev_nvme_set_options", 00:26:43.012 "params": { 00:26:43.012 "action_on_timeout": "none", 00:26:43.012 "timeout_us": 0, 00:26:43.012 "timeout_admin_us": 0, 00:26:43.012 "keep_alive_timeout_ms": 10000, 00:26:43.012 "arbitration_burst": 0, 00:26:43.012 "low_priority_weight": 0, 00:26:43.012 "medium_priority_weight": 0, 00:26:43.012 "high_priority_weight": 0, 00:26:43.012 "nvme_adminq_poll_period_us": 10000, 00:26:43.012 "nvme_ioq_poll_period_us": 0, 00:26:43.012 "io_queue_requests": 0, 00:26:43.012 "delay_cmd_submit": true, 00:26:43.012 "transport_retry_count": 4, 00:26:43.012 "bdev_retry_count": 3, 00:26:43.012 "transport_ack_timeout": 0, 00:26:43.012 "ctrlr_loss_timeout_sec": 0, 00:26:43.012 "reconnect_delay_sec": 0, 00:26:43.012 "fast_io_fail_timeout_sec": 0, 00:26:43.012 "disable_auto_failback": false, 00:26:43.012 "generate_uuids": false, 00:26:43.012 "transport_tos": 0, 00:26:43.012 "nvme_error_stat": false, 00:26:43.012 "rdma_srq_size": 0, 00:26:43.012 "io_path_stat": false, 00:26:43.012 "allow_accel_sequence": false, 00:26:43.012 "rdma_max_cq_size": 0, 00:26:43.012 "rdma_cm_event_timeout_ms": 0, 00:26:43.012 "dhchap_digests": [ 00:26:43.012 "sha256", 00:26:43.012 "sha384", 
00:26:43.012 "sha512" 00:26:43.012 ], 00:26:43.012 "dhchap_dhgroups": [ 00:26:43.012 "null", 00:26:43.012 "ffdhe2048", 00:26:43.012 "ffdhe3072", 00:26:43.012 "ffdhe4096", 00:26:43.012 "ffdhe6144", 00:26:43.012 "ffdhe8192" 00:26:43.012 ] 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "bdev_nvme_set_hotplug", 00:26:43.012 "params": { 00:26:43.012 "period_us": 100000, 00:26:43.012 "enable": false 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "bdev_malloc_create", 00:26:43.012 "params": { 00:26:43.012 "name": "malloc0", 00:26:43.012 "num_blocks": 8192, 00:26:43.012 "block_size": 4096, 00:26:43.012 "physical_block_size": 4096, 00:26:43.012 "uuid": "ab5b8807-8358-4020-8eca-ae2b5f660fc0", 00:26:43.012 "optimal_io_boundary": 0 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "bdev_wait_for_examine" 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "nbd", 00:26:43.012 "config": [] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "scheduler", 00:26:43.012 "config": [ 00:26:43.012 { 00:26:43.012 "method": "framework_set_scheduler", 00:26:43.012 "params": { 00:26:43.012 "name": "static" 00:26:43.012 } 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "subsystem": "nvmf", 00:26:43.012 "config": [ 00:26:43.012 { 00:26:43.012 "method": "nvmf_set_config", 00:26:43.012 "params": { 00:26:43.012 "discovery_filter": "match_any", 00:26:43.012 "admin_cmd_passthru": { 00:26:43.012 "identify_ctrlr": false 00:26:43.012 } 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_set_max_subsystems", 00:26:43.012 "params": { 00:26:43.012 "max_subsystems": 1024 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_set_crdt", 00:26:43.012 "params": { 00:26:43.012 "crdt1": 0, 00:26:43.012 "crdt2": 0, 00:26:43.012 "crdt3": 0 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_create_transport", 00:26:43.012 "params": { 00:26:43.012 "trtype": "TCP", 00:26:43.012 "max_queue_depth": 128, 00:26:43.012 "max_io_qpairs_per_ctrlr": 127, 00:26:43.012 "in_capsule_data_size": 4096, 00:26:43.012 "max_io_size": 131072, 00:26:43.012 "io_unit_size": 131072, 00:26:43.012 "max_aq_depth": 128, 00:26:43.012 "num_shared_buffers": 511, 00:26:43.012 "buf_cache_size": 4294967295, 00:26:43.012 "dif_insert_or_strip": false, 00:26:43.012 "zcopy": false, 00:26:43.012 "c2h_success": false, 00:26:43.012 "sock_priority": 0, 00:26:43.012 "abort_timeout_sec": 1, 00:26:43.012 "ack_timeout": 0, 00:26:43.012 "data_wr_pool_size": 0 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_create_subsystem", 00:26:43.012 "params": { 00:26:43.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.012 "allow_any_host": false, 00:26:43.012 "serial_number": "00000000000000000000", 00:26:43.012 "model_number": "SPDK bdev Controller", 00:26:43.012 "max_namespaces": 32, 00:26:43.012 "min_cntlid": 1, 00:26:43.012 "max_cntlid": 65519, 00:26:43.012 "ana_reporting": false 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_subsystem_add_host", 00:26:43.012 "params": { 00:26:43.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.012 "host": "nqn.2016-06.io.spdk:host1", 00:26:43.012 "psk": "key0" 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_subsystem_add_ns", 00:26:43.012 "params": { 00:26:43.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.012 "namespace": { 00:26:43.012 "nsid": 1, 00:26:43.012 "bdev_name": 
"malloc0", 00:26:43.012 "nguid": "AB5B8807835840208ECAAE2B5F660FC0", 00:26:43.012 "uuid": "ab5b8807-8358-4020-8eca-ae2b5f660fc0", 00:26:43.012 "no_auto_visible": false 00:26:43.012 } 00:26:43.012 } 00:26:43.012 }, 00:26:43.012 { 00:26:43.012 "method": "nvmf_subsystem_add_listener", 00:26:43.012 "params": { 00:26:43.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.012 "listen_address": { 00:26:43.012 "trtype": "TCP", 00:26:43.012 "adrfam": "IPv4", 00:26:43.012 "traddr": "10.0.0.2", 00:26:43.012 "trsvcid": "4420" 00:26:43.012 }, 00:26:43.012 "secure_channel": true 00:26:43.012 } 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 } 00:26:43.012 ] 00:26:43.012 }' 00:26:43.013 13:54:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:43.271 13:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:26:43.271 "subsystems": [ 00:26:43.271 { 00:26:43.271 "subsystem": "keyring", 00:26:43.271 "config": [ 00:26:43.271 { 00:26:43.271 "method": "keyring_file_add_key", 00:26:43.271 "params": { 00:26:43.271 "name": "key0", 00:26:43.271 "path": "/tmp/tmp.vNuJp2CO3D" 00:26:43.271 } 00:26:43.271 } 00:26:43.271 ] 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "subsystem": "iobuf", 00:26:43.271 "config": [ 00:26:43.271 { 00:26:43.271 "method": "iobuf_set_options", 00:26:43.271 "params": { 00:26:43.271 "small_pool_count": 8192, 00:26:43.271 "large_pool_count": 1024, 00:26:43.271 "small_bufsize": 8192, 00:26:43.271 "large_bufsize": 135168 00:26:43.271 } 00:26:43.271 } 00:26:43.271 ] 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "subsystem": "sock", 00:26:43.271 "config": [ 00:26:43.271 { 00:26:43.271 "method": "sock_set_default_impl", 00:26:43.271 "params": { 00:26:43.271 "impl_name": "posix" 00:26:43.271 } 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "method": "sock_impl_set_options", 00:26:43.271 "params": { 00:26:43.271 "impl_name": "ssl", 00:26:43.271 "recv_buf_size": 4096, 00:26:43.271 "send_buf_size": 4096, 00:26:43.271 "enable_recv_pipe": true, 00:26:43.271 "enable_quickack": false, 00:26:43.271 "enable_placement_id": 0, 00:26:43.271 "enable_zerocopy_send_server": true, 00:26:43.271 "enable_zerocopy_send_client": false, 00:26:43.271 "zerocopy_threshold": 0, 00:26:43.271 "tls_version": 0, 00:26:43.271 "enable_ktls": false 00:26:43.271 } 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "method": "sock_impl_set_options", 00:26:43.271 "params": { 00:26:43.271 "impl_name": "posix", 00:26:43.271 "recv_buf_size": 2097152, 00:26:43.271 "send_buf_size": 2097152, 00:26:43.271 "enable_recv_pipe": true, 00:26:43.271 "enable_quickack": false, 00:26:43.271 "enable_placement_id": 0, 00:26:43.271 "enable_zerocopy_send_server": true, 00:26:43.271 "enable_zerocopy_send_client": false, 00:26:43.271 "zerocopy_threshold": 0, 00:26:43.271 "tls_version": 0, 00:26:43.271 "enable_ktls": false 00:26:43.271 } 00:26:43.271 } 00:26:43.271 ] 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "subsystem": "vmd", 00:26:43.271 "config": [] 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "subsystem": "accel", 00:26:43.271 "config": [ 00:26:43.271 { 00:26:43.271 "method": "accel_set_options", 00:26:43.271 "params": { 00:26:43.271 "small_cache_size": 128, 00:26:43.271 "large_cache_size": 16, 00:26:43.271 "task_count": 2048, 00:26:43.271 "sequence_count": 2048, 00:26:43.271 "buf_count": 2048 00:26:43.271 } 00:26:43.271 } 00:26:43.271 ] 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "subsystem": "bdev", 00:26:43.271 "config": [ 00:26:43.271 { 00:26:43.271 
"method": "bdev_set_options", 00:26:43.271 "params": { 00:26:43.271 "bdev_io_pool_size": 65535, 00:26:43.271 "bdev_io_cache_size": 256, 00:26:43.271 "bdev_auto_examine": true, 00:26:43.271 "iobuf_small_cache_size": 128, 00:26:43.271 "iobuf_large_cache_size": 16 00:26:43.271 } 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "method": "bdev_raid_set_options", 00:26:43.271 "params": { 00:26:43.271 "process_window_size_kb": 1024 00:26:43.271 } 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "method": "bdev_iscsi_set_options", 00:26:43.271 "params": { 00:26:43.271 "timeout_sec": 30 00:26:43.271 } 00:26:43.271 }, 00:26:43.271 { 00:26:43.271 "method": "bdev_nvme_set_options", 00:26:43.271 "params": { 00:26:43.271 "action_on_timeout": "none", 00:26:43.271 "timeout_us": 0, 00:26:43.271 "timeout_admin_us": 0, 00:26:43.271 "keep_alive_timeout_ms": 10000, 00:26:43.271 "arbitration_burst": 0, 00:26:43.271 "low_priority_weight": 0, 00:26:43.271 "medium_priority_weight": 0, 00:26:43.272 "high_priority_weight": 0, 00:26:43.272 "nvme_adminq_poll_period_us": 10000, 00:26:43.272 "nvme_ioq_poll_period_us": 0, 00:26:43.272 "io_queue_requests": 512, 00:26:43.272 "delay_cmd_submit": true, 00:26:43.272 "transport_retry_count": 4, 00:26:43.272 "bdev_retry_count": 3, 00:26:43.272 "transport_ack_timeout": 0, 00:26:43.272 "ctrlr_loss_timeout_sec": 0, 00:26:43.272 "reconnect_delay_sec": 0, 00:26:43.272 "fast_io_fail_timeout_sec": 0, 00:26:43.272 "disable_auto_failback": false, 00:26:43.272 "generate_uuids": false, 00:26:43.272 "transport_tos": 0, 00:26:43.272 "nvme_error_stat": false, 00:26:43.272 "rdma_srq_size": 0, 00:26:43.272 "io_path_stat": false, 00:26:43.272 "allow_accel_sequence": false, 00:26:43.272 "rdma_max_cq_size": 0, 00:26:43.272 "rdma_cm_event_timeout_ms": 0, 00:26:43.272 "dhchap_digests": [ 00:26:43.272 "sha256", 00:26:43.272 "sha384", 00:26:43.272 "sha512" 00:26:43.272 ], 00:26:43.272 "dhchap_dhgroups": [ 00:26:43.272 "null", 00:26:43.272 "ffdhe2048", 00:26:43.272 "ffdhe3072", 00:26:43.272 "ffdhe4096", 00:26:43.272 "ffdhe6144", 00:26:43.272 "ffdhe8192" 00:26:43.272 ] 00:26:43.272 } 00:26:43.272 }, 00:26:43.272 { 00:26:43.272 "method": "bdev_nvme_attach_controller", 00:26:43.272 "params": { 00:26:43.272 "name": "nvme0", 00:26:43.272 "trtype": "TCP", 00:26:43.272 "adrfam": "IPv4", 00:26:43.272 "traddr": "10.0.0.2", 00:26:43.272 "trsvcid": "4420", 00:26:43.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.272 "prchk_reftag": false, 00:26:43.272 "prchk_guard": false, 00:26:43.272 "ctrlr_loss_timeout_sec": 0, 00:26:43.272 "reconnect_delay_sec": 0, 00:26:43.272 "fast_io_fail_timeout_sec": 0, 00:26:43.272 "psk": "key0", 00:26:43.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.272 "hdgst": false, 00:26:43.272 "ddgst": false 00:26:43.272 } 00:26:43.272 }, 00:26:43.272 { 00:26:43.272 "method": "bdev_nvme_set_hotplug", 00:26:43.272 "params": { 00:26:43.272 "period_us": 100000, 00:26:43.272 "enable": false 00:26:43.272 } 00:26:43.272 }, 00:26:43.272 { 00:26:43.272 "method": "bdev_enable_histogram", 00:26:43.272 "params": { 00:26:43.272 "name": "nvme0n1", 00:26:43.272 "enable": true 00:26:43.272 } 00:26:43.272 }, 00:26:43.272 { 00:26:43.272 "method": "bdev_wait_for_examine" 00:26:43.272 } 00:26:43.272 ] 00:26:43.272 }, 00:26:43.272 { 00:26:43.272 "subsystem": "nbd", 00:26:43.272 "config": [] 00:26:43.272 } 00:26:43.272 ] 00:26:43.272 }' 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1490238 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1490238 
']' 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1490238 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1490238 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1490238' 00:26:43.272 killing process with pid 1490238 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1490238 00:26:43.272 Received shutdown signal, test time was about 1.000000 seconds 00:26:43.272 00:26:43.272 Latency(us) 00:26:43.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.272 =================================================================================================================== 00:26:43.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.272 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1490238 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1489967 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1489967 ']' 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1489967 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1489967 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1489967' 00:26:43.531 killing process with pid 1489967 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1489967 00:26:43.531 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1489967 00:26:43.790 13:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:26:43.790 13:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:43.790 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:43.790 13:54:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:26:43.790 "subsystems": [ 00:26:43.790 { 00:26:43.790 "subsystem": "keyring", 00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "keyring_file_add_key", 00:26:43.790 "params": { 00:26:43.790 "name": "key0", 00:26:43.790 "path": "/tmp/tmp.vNuJp2CO3D" 00:26:43.790 } 00:26:43.790 } 00:26:43.790 ] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "iobuf", 00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "iobuf_set_options", 00:26:43.790 "params": { 00:26:43.790 "small_pool_count": 8192, 00:26:43.790 "large_pool_count": 1024, 00:26:43.790 "small_bufsize": 8192, 00:26:43.790 "large_bufsize": 135168 00:26:43.790 } 00:26:43.790 } 00:26:43.790 ] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "sock", 
00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "sock_set_default_impl", 00:26:43.790 "params": { 00:26:43.790 "impl_name": "posix" 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "sock_impl_set_options", 00:26:43.790 "params": { 00:26:43.790 "impl_name": "ssl", 00:26:43.790 "recv_buf_size": 4096, 00:26:43.790 "send_buf_size": 4096, 00:26:43.790 "enable_recv_pipe": true, 00:26:43.790 "enable_quickack": false, 00:26:43.790 "enable_placement_id": 0, 00:26:43.790 "enable_zerocopy_send_server": true, 00:26:43.790 "enable_zerocopy_send_client": false, 00:26:43.790 "zerocopy_threshold": 0, 00:26:43.790 "tls_version": 0, 00:26:43.790 "enable_ktls": false 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "sock_impl_set_options", 00:26:43.790 "params": { 00:26:43.790 "impl_name": "posix", 00:26:43.790 "recv_buf_size": 2097152, 00:26:43.790 "send_buf_size": 2097152, 00:26:43.790 "enable_recv_pipe": true, 00:26:43.790 "enable_quickack": false, 00:26:43.790 "enable_placement_id": 0, 00:26:43.790 "enable_zerocopy_send_server": true, 00:26:43.790 "enable_zerocopy_send_client": false, 00:26:43.790 "zerocopy_threshold": 0, 00:26:43.790 "tls_version": 0, 00:26:43.790 "enable_ktls": false 00:26:43.790 } 00:26:43.790 } 00:26:43.790 ] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "vmd", 00:26:43.790 "config": [] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "accel", 00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "accel_set_options", 00:26:43.790 "params": { 00:26:43.790 "small_cache_size": 128, 00:26:43.790 "large_cache_size": 16, 00:26:43.790 "task_count": 2048, 00:26:43.790 "sequence_count": 2048, 00:26:43.790 "buf_count": 2048 00:26:43.790 } 00:26:43.790 } 00:26:43.790 ] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "bdev", 00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "bdev_set_options", 00:26:43.790 "params": { 00:26:43.790 "bdev_io_pool_size": 65535, 00:26:43.790 "bdev_io_cache_size": 256, 00:26:43.790 "bdev_auto_examine": true, 00:26:43.790 "iobuf_small_cache_size": 128, 00:26:43.790 "iobuf_large_cache_size": 16 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "bdev_raid_set_options", 00:26:43.790 "params": { 00:26:43.790 "process_window_size_kb": 1024 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "bdev_iscsi_set_options", 00:26:43.790 "params": { 00:26:43.790 "timeout_sec": 30 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "bdev_nvme_set_options", 00:26:43.790 "params": { 00:26:43.790 "action_on_timeout": "none", 00:26:43.790 "timeout_us": 0, 00:26:43.790 "timeout_admin_us": 0, 00:26:43.790 "keep_alive_timeout_ms": 10000, 00:26:43.790 "arbitration_burst": 0, 00:26:43.790 "low_priority_weight": 0, 00:26:43.790 "medium_priority_weight": 0, 00:26:43.790 "high_priority_weight": 0, 00:26:43.790 "nvme_adminq_poll_period_us": 10000, 00:26:43.790 "nvme_ioq_poll_period_us": 0, 00:26:43.790 "io_queue_requests": 0, 00:26:43.790 "delay_cmd_submit": true, 00:26:43.790 "transport_retry_count": 4, 00:26:43.790 "bdev_retry_count": 3, 00:26:43.790 "transport_ack_timeout": 0, 00:26:43.790 "ctrlr_loss_timeout_sec": 0, 00:26:43.790 "reconnect_delay_sec": 0, 00:26:43.790 "fast_io_fail_timeout_sec": 0, 00:26:43.790 "disable_auto_failback": false, 00:26:43.790 "generate_uuids": false, 00:26:43.790 "transport_tos": 0, 00:26:43.790 "nvme_error_stat": false, 00:26:43.790 "rdma_srq_size": 0, 00:26:43.790 "io_path_stat": false, 00:26:43.790 
"allow_accel_sequence": false, 00:26:43.790 "rdma_max_cq_size": 0, 00:26:43.790 "rdma_cm_event_timeout_ms": 0, 00:26:43.790 "dhchap_digests": [ 00:26:43.790 "sha256", 00:26:43.790 "sha384", 00:26:43.790 "sha512" 00:26:43.790 ], 00:26:43.790 "dhchap_dhgroups": [ 00:26:43.790 "null", 00:26:43.790 "ffdhe2048", 00:26:43.790 "ffdhe3072", 00:26:43.790 "ffdhe4096", 00:26:43.790 "ffdhe6144", 00:26:43.790 "ffdhe8192" 00:26:43.790 ] 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "bdev_nvme_set_hotplug", 00:26:43.790 "params": { 00:26:43.790 "period_us": 100000, 00:26:43.790 "enable": false 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "bdev_malloc_create", 00:26:43.790 "params": { 00:26:43.790 "name": "malloc0", 00:26:43.790 "num_blocks": 8192, 00:26:43.790 "block_size": 4096, 00:26:43.790 "physical_block_size": 4096, 00:26:43.790 "uuid": "ab5b8807-8358-4020-8eca-ae2b5f660fc0", 00:26:43.790 "optimal_io_boundary": 0 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "bdev_wait_for_examine" 00:26:43.790 } 00:26:43.790 ] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "nbd", 00:26:43.790 "config": [] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "scheduler", 00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "framework_set_scheduler", 00:26:43.790 "params": { 00:26:43.790 "name": "static" 00:26:43.790 } 00:26:43.790 } 00:26:43.790 ] 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "subsystem": "nvmf", 00:26:43.790 "config": [ 00:26:43.790 { 00:26:43.790 "method": "nvmf_set_config", 00:26:43.790 "params": { 00:26:43.790 "discovery_filter": "match_any", 00:26:43.790 "admin_cmd_passthru": { 00:26:43.790 "identify_ctrlr": false 00:26:43.790 } 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.790 "method": "nvmf_set_max_subsystems", 00:26:43.790 "params": { 00:26:43.790 "max_subsystems": 1024 00:26:43.790 } 00:26:43.790 }, 00:26:43.790 { 00:26:43.791 "method": "nvmf_set_crdt", 00:26:43.791 "params": { 00:26:43.791 "crdt1": 0, 00:26:43.791 "crdt2": 0, 00:26:43.791 "crdt3": 0 00:26:43.791 } 00:26:43.791 }, 00:26:43.791 { 00:26:43.791 "method": "nvmf_create_transport", 00:26:43.791 "params": { 00:26:43.791 "trtype": "TCP", 00:26:43.791 "max_queue_depth": 128, 00:26:43.791 "max_io_qpairs_per_ctrlr": 127, 00:26:43.791 "in_capsule_data_size": 4096, 00:26:43.791 "max_io_size": 131072, 00:26:43.791 "io_unit_size": 131072, 00:26:43.791 "max_aq_depth": 128, 00:26:43.791 "num_shared_buffers": 511, 00:26:43.791 "buf_cache_size": 4294967295, 00:26:43.791 "dif_insert_or_strip": false, 00:26:43.791 "zcopy": false, 00:26:43.791 "c2h_success": false, 00:26:43.791 "sock_priority": 0, 00:26:43.791 "abort_timeout_sec": 1, 00:26:43.791 "ack_timeout": 0, 00:26:43.791 "data_wr_pool_size": 0 00:26:43.791 } 00:26:43.791 }, 00:26:43.791 { 00:26:43.791 "method": "nvmf_create_subsystem", 00:26:43.791 "params": { 00:26:43.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.791 "allow_any_host": false, 00:26:43.791 "serial_number": "00000000000000000000", 00:26:43.791 "model_number": "SPDK bdev Controller", 00:26:43.791 "max_namespaces": 32, 00:26:43.791 "min_cntlid": 1, 00:26:43.791 "max_cntlid": 65519, 00:26:43.791 "ana_reporting": false 00:26:43.791 } 00:26:43.791 }, 00:26:43.791 { 00:26:43.791 "method": "nvmf_subsystem_add_host", 00:26:43.791 "params": { 00:26:43.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.791 "host": "nqn.2016-06.io.spdk:host1", 00:26:43.791 "psk": "key0" 00:26:43.791 } 00:26:43.791 }, 00:26:43.791 { 00:26:43.791 
"method": "nvmf_subsystem_add_ns", 00:26:43.791 "params": { 00:26:43.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.791 "namespace": { 00:26:43.791 "nsid": 1, 00:26:43.791 "bdev_name": "malloc0", 00:26:43.791 "nguid": "AB5B8807835840208ECAAE2B5F660FC0", 00:26:43.791 "uuid": "ab5b8807-8358-4020-8eca-ae2b5f660fc0", 00:26:43.791 "no_auto_visible": false 00:26:43.791 } 00:26:43.791 } 00:26:43.791 }, 00:26:43.791 { 00:26:43.791 "method": "nvmf_subsystem_add_listener", 00:26:43.791 "params": { 00:26:43.791 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.791 "listen_address": { 00:26:43.791 "trtype": "TCP", 00:26:43.791 "adrfam": "IPv4", 00:26:43.791 "traddr": "10.0.0.2", 00:26:43.791 "trsvcid": "4420" 00:26:43.791 }, 00:26:43.791 "secure_channel": true 00:26:43.791 } 00:26:43.791 } 00:26:43.791 ] 00:26:43.791 } 00:26:43.791 ] 00:26:43.791 }' 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1490811 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1490811 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1490811 ']' 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:43.791 13:54:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:43.791 [2024-06-11 13:54:36.601171] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:43.791 [2024-06-11 13:54:36.601233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.791 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.049 [2024-06-11 13:54:36.708886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.049 [2024-06-11 13:54:36.788729] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.049 [2024-06-11 13:54:36.788778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.049 [2024-06-11 13:54:36.788791] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.049 [2024-06-11 13:54:36.788803] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.049 [2024-06-11 13:54:36.788813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:44.049 [2024-06-11 13:54:36.788897] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.308 [2024-06-11 13:54:37.007006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.308 [2024-06-11 13:54:37.039017] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:44.308 [2024-06-11 13:54:37.046835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1491084 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1491084 /var/tmp/bdevperf.sock 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1491084 ']' 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:44.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
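The initiator gets the same treatment next: its configuration, saved earlier from the first bdevperf, already embeds the keyring_file_add_key and bdev_nvme_attach_controller calls, so a bdevperf started from it over /dev/fd/63 must bring the TLS connection up with no further RPCs. The check that follows only has to confirm the controller exists before running I/O; along these lines:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # start a new bdevperf straight from the initiator JSON captured earlier
    # with save_config (the '-c /dev/fd/63' seen in the command line below)
    $rootdir/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
            -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

    # the controller must already exist, created purely by the config replay
    name=$($rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]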
00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:44.875 13:54:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:26:44.875 "subsystems": [ 00:26:44.875 { 00:26:44.875 "subsystem": "keyring", 00:26:44.875 "config": [ 00:26:44.875 { 00:26:44.875 "method": "keyring_file_add_key", 00:26:44.875 "params": { 00:26:44.875 "name": "key0", 00:26:44.875 "path": "/tmp/tmp.vNuJp2CO3D" 00:26:44.875 } 00:26:44.875 } 00:26:44.875 ] 00:26:44.875 }, 00:26:44.875 { 00:26:44.875 "subsystem": "iobuf", 00:26:44.875 "config": [ 00:26:44.875 { 00:26:44.875 "method": "iobuf_set_options", 00:26:44.875 "params": { 00:26:44.875 "small_pool_count": 8192, 00:26:44.875 "large_pool_count": 1024, 00:26:44.875 "small_bufsize": 8192, 00:26:44.875 "large_bufsize": 135168 00:26:44.875 } 00:26:44.875 } 00:26:44.875 ] 00:26:44.875 }, 00:26:44.875 { 00:26:44.875 "subsystem": "sock", 00:26:44.875 "config": [ 00:26:44.875 { 00:26:44.875 "method": "sock_set_default_impl", 00:26:44.875 "params": { 00:26:44.875 "impl_name": "posix" 00:26:44.875 } 00:26:44.875 }, 00:26:44.875 { 00:26:44.875 "method": "sock_impl_set_options", 00:26:44.875 "params": { 00:26:44.875 "impl_name": "ssl", 00:26:44.875 "recv_buf_size": 4096, 00:26:44.875 "send_buf_size": 4096, 00:26:44.875 "enable_recv_pipe": true, 00:26:44.875 "enable_quickack": false, 00:26:44.875 "enable_placement_id": 0, 00:26:44.875 "enable_zerocopy_send_server": true, 00:26:44.875 "enable_zerocopy_send_client": false, 00:26:44.875 "zerocopy_threshold": 0, 00:26:44.875 "tls_version": 0, 00:26:44.875 "enable_ktls": false 00:26:44.875 } 00:26:44.875 }, 00:26:44.875 { 00:26:44.875 "method": "sock_impl_set_options", 00:26:44.875 "params": { 00:26:44.875 "impl_name": "posix", 00:26:44.875 "recv_buf_size": 2097152, 00:26:44.875 "send_buf_size": 2097152, 00:26:44.875 "enable_recv_pipe": true, 00:26:44.875 "enable_quickack": false, 00:26:44.875 "enable_placement_id": 0, 00:26:44.875 "enable_zerocopy_send_server": true, 00:26:44.875 "enable_zerocopy_send_client": false, 00:26:44.875 "zerocopy_threshold": 0, 00:26:44.875 "tls_version": 0, 00:26:44.875 "enable_ktls": false 00:26:44.875 } 00:26:44.875 } 00:26:44.875 ] 00:26:44.875 }, 00:26:44.875 { 00:26:44.875 "subsystem": "vmd", 00:26:44.875 "config": [] 00:26:44.875 }, 00:26:44.875 { 00:26:44.875 "subsystem": "accel", 00:26:44.875 "config": [ 00:26:44.875 { 00:26:44.876 "method": "accel_set_options", 00:26:44.876 "params": { 00:26:44.876 "small_cache_size": 128, 00:26:44.876 "large_cache_size": 16, 00:26:44.876 "task_count": 2048, 00:26:44.876 "sequence_count": 2048, 00:26:44.876 "buf_count": 2048 00:26:44.876 } 00:26:44.876 } 00:26:44.876 ] 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "subsystem": "bdev", 00:26:44.876 "config": [ 00:26:44.876 { 00:26:44.876 "method": "bdev_set_options", 00:26:44.876 "params": { 00:26:44.876 "bdev_io_pool_size": 65535, 00:26:44.876 "bdev_io_cache_size": 256, 00:26:44.876 "bdev_auto_examine": true, 00:26:44.876 "iobuf_small_cache_size": 128, 00:26:44.876 "iobuf_large_cache_size": 16 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_raid_set_options", 00:26:44.876 "params": { 00:26:44.876 "process_window_size_kb": 1024 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_iscsi_set_options", 00:26:44.876 "params": { 00:26:44.876 "timeout_sec": 30 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_nvme_set_options", 00:26:44.876 "params": { 00:26:44.876 "action_on_timeout": "none", 
00:26:44.876 "timeout_us": 0, 00:26:44.876 "timeout_admin_us": 0, 00:26:44.876 "keep_alive_timeout_ms": 10000, 00:26:44.876 "arbitration_burst": 0, 00:26:44.876 "low_priority_weight": 0, 00:26:44.876 "medium_priority_weight": 0, 00:26:44.876 "high_priority_weight": 0, 00:26:44.876 "nvme_adminq_poll_period_us": 10000, 00:26:44.876 "nvme_ioq_poll_period_us": 0, 00:26:44.876 "io_queue_requests": 512, 00:26:44.876 "delay_cmd_submit": true, 00:26:44.876 "transport_retry_count": 4, 00:26:44.876 "bdev_retry_count": 3, 00:26:44.876 "transport_ack_timeout": 0, 00:26:44.876 "ctrlr_loss_timeout_sec": 0, 00:26:44.876 "reconnect_delay_sec": 0, 00:26:44.876 "fast_io_fail_timeout_sec": 0, 00:26:44.876 "disable_auto_failback": false, 00:26:44.876 "generate_uuids": false, 00:26:44.876 "transport_tos": 0, 00:26:44.876 "nvme_error_stat": false, 00:26:44.876 "rdma_srq_size": 0, 00:26:44.876 "io_path_stat": false, 00:26:44.876 "allow_accel_sequence": false, 00:26:44.876 "rdma_max_cq_size": 0, 00:26:44.876 "rdma_cm_event_timeout_ms": 0, 00:26:44.876 "dhchap_digests": [ 00:26:44.876 "sha256", 00:26:44.876 "sha384", 00:26:44.876 "sha512" 00:26:44.876 ], 00:26:44.876 "dhchap_dhgroups": [ 00:26:44.876 "null", 00:26:44.876 "ffdhe2048", 00:26:44.876 "ffdhe3072", 00:26:44.876 "ffdhe4096", 00:26:44.876 "ffdhe6144", 00:26:44.876 "ffdhe8192" 00:26:44.876 ] 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_nvme_attach_controller", 00:26:44.876 "params": { 00:26:44.876 "name": "nvme0", 00:26:44.876 "trtype": "TCP", 00:26:44.876 "adrfam": "IPv4", 00:26:44.876 "traddr": "10.0.0.2", 00:26:44.876 "trsvcid": "4420", 00:26:44.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.876 "prchk_reftag": false, 00:26:44.876 "prchk_guard": false, 00:26:44.876 "ctrlr_loss_timeout_sec": 0, 00:26:44.876 "reconnect_delay_sec": 0, 00:26:44.876 "fast_io_fail_timeout_sec": 0, 00:26:44.876 "psk": "key0", 00:26:44.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:44.876 "hdgst": false, 00:26:44.876 "ddgst": false 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_nvme_set_hotplug", 00:26:44.876 "params": { 00:26:44.876 "period_us": 100000, 00:26:44.876 "enable": false 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_enable_histogram", 00:26:44.876 "params": { 00:26:44.876 "name": "nvme0n1", 00:26:44.876 "enable": true 00:26:44.876 } 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "method": "bdev_wait_for_examine" 00:26:44.876 } 00:26:44.876 ] 00:26:44.876 }, 00:26:44.876 { 00:26:44.876 "subsystem": "nbd", 00:26:44.876 "config": [] 00:26:44.876 } 00:26:44.876 ] 00:26:44.876 }' 00:26:44.876 13:54:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:44.876 [2024-06-11 13:54:37.591448] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:26:44.876 [2024-06-11 13:54:37.591527] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491084 ] 00:26:44.876 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.876 [2024-06-11 13:54:37.684695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.876 [2024-06-11 13:54:37.769140] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.134 [2024-06-11 13:54:37.925028] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:45.700 13:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:45.700 13:54:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:26:45.700 13:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:45.700 13:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:26:45.958 13:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.958 13:54:38 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:45.958 Running I/O for 1 seconds... 00:26:47.331 00:26:47.331 Latency(us) 00:26:47.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.331 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:47.331 Verification LBA range: start 0x0 length 0x2000 00:26:47.331 nvme0n1 : 1.02 3938.52 15.38 0.00 0.00 32128.54 6317.67 69206.02 00:26:47.332 =================================================================================================================== 00:26:47.332 Total : 3938.52 15.38 0.00 0.00 32128.54 6317.67 69206.02 00:26:47.332 0 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:47.332 nvmf_trace.0 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1491084 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1491084 ']' 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1491084 
00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:47.332 13:54:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1491084 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1491084' 00:26:47.332 killing process with pid 1491084 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1491084 00:26:47.332 Received shutdown signal, test time was about 1.000000 seconds 00:26:47.332 00:26:47.332 Latency(us) 00:26:47.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.332 =================================================================================================================== 00:26:47.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1491084 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.332 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.590 rmmod nvme_tcp 00:26:47.590 rmmod nvme_fabrics 00:26:47.590 rmmod nvme_keyring 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1490811 ']' 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1490811 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1490811 ']' 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1490811 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1490811 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1490811' 00:26:47.591 killing process with pid 1490811 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1490811 00:26:47.591 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1490811 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.849 13:54:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.753 13:54:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.753 13:54:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Bik4gMGAL6 /tmp/tmp.Vh9VfpHO3d /tmp/tmp.vNuJp2CO3D 00:26:49.753 00:26:49.753 real 1m27.250s 00:26:49.753 user 2m8.755s 00:26:49.753 sys 0m34.864s 00:26:49.753 13:54:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:49.753 13:54:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:49.753 ************************************ 00:26:49.753 END TEST nvmf_tls 00:26:49.753 ************************************ 00:26:50.012 13:54:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:50.012 13:54:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:50.012 13:54:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:50.012 13:54:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.012 ************************************ 00:26:50.012 START TEST nvmf_fips 00:26:50.012 ************************************ 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:50.012 * Looking for test storage... 
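Before generating any NVMe/TCP traffic, fips.sh gates on the host's OpenSSL installation: the version string must compare >= 3.0.0 field by field, the provider list must contain both a base and a fips provider, and a non-approved digest must be rejected. A minimal standalone version of that gate, assuming an OpenSSL 3.x host like the RHEL 9 build on this node:

  openssl version | awk '{print $2}'    # e.g. 3.0.9, compared field-by-field against 3.0.0
  openssl list -providers | grep name   # expects a base provider and a fips provider
  echo test | openssl md5               # must fail: MD5 is not FIPS-approved

The deliberate md5 failure corresponds to the "Error setting digest" lines below; a correctly configured FIPS host refuses the algorithm, and the test treats that error as success.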
00:26:50.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.012 13:54:42 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:50.012 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:50.013 13:54:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:26:50.271 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:50.271 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:26:50.271 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:26:50.272 13:54:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:26:50.272 Error setting digest 00:26:50.272 00826B2F4F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:50.272 00826B2F4F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.272 13:54:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.855 
13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:56.855 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.855 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:56.855 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:56.856 Found net devices under 0000:af:00.0: cvl_0_0 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:56.856 Found net devices under 0000:af:00.1: cvl_0_1 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.856 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.116 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.116 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.116 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.116 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.116 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.116 13:54:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.116 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:26:57.382 00:26:57.382 --- 10.0.0.2 ping statistics --- 00:26:57.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.382 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:57.382 00:26:57.382 --- 10.0.0.1 ping statistics --- 00:26:57.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.382 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1495296 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1495296 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1495296 ']' 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:57.382 13:54:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:57.382 [2024-06-11 13:54:50.177924] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:57.382 [2024-06-11 13:54:50.177993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.382 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.382 [2024-06-11 13:54:50.274916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.651 [2024-06-11 13:54:50.358768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.651 [2024-06-11 13:54:50.358814] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:57.651 [2024-06-11 13:54:50.358828] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.651 [2024-06-11 13:54:50.358840] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.651 [2024-06-11 13:54:50.358850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.651 [2024-06-11 13:54:50.358885] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:58.219 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:58.478 [2024-06-11 13:54:51.255145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.478 [2024-06-11 13:54:51.271149] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:58.478 [2024-06-11 13:54:51.271383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.478 [2024-06-11 13:54:51.300542] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:58.478 malloc0 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1495407 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1495407 /var/tmp/bdevperf.sock 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1495407 ']' 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:58.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:58.478 13:54:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:58.737 [2024-06-11 13:54:51.393734] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:26:58.737 [2024-06-11 13:54:51.393800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495407 ] 00:26:58.737 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.737 [2024-06-11 13:54:51.472743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.737 [2024-06-11 13:54:51.542288] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.673 13:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:59.673 13:54:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:26:59.674 13:54:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:26:59.674 [2024-06-11 13:54:52.477322] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:59.674 [2024-06-11 13:54:52.477415] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:59.674 TLSTESTn1 00:26:59.674 13:54:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:59.933 Running I/O for 10 seconds... 
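perform_tests drives the verify workload for the full 10 seconds before bdevperf prints the per-device table that follows. As a consistency check against that table: 4487.43 IOPS x 4096 B per I/O is roughly 17.53 MiB/s, matching the MiB/s column, and at queue depth 128 Little's law predicts an average latency of 128 / 4487.43 IOPS, roughly 28.5 ms, in line with the ~28,472 us average reported.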
00:27:09.914 00:27:09.914 Latency(us) 00:27:09.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.914 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:09.914 Verification LBA range: start 0x0 length 0x2000 00:27:09.914 TLSTESTn1 : 10.03 4487.43 17.53 0.00 0.00 28471.78 6527.39 48444.21 00:27:09.914 =================================================================================================================== 00:27:09.914 Total : 4487.43 17.53 0.00 0.00 28471.78 6527.39 48444.21 00:27:09.914 0 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:27:09.914 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:09.914 nvmf_trace.0 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1495407 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1495407 ']' 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1495407 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1495407 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1495407' 00:27:10.173 killing process with pid 1495407 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1495407 00:27:10.173 Received shutdown signal, test time was about 10.000000 seconds 00:27:10.173 00:27:10.173 Latency(us) 00:27:10.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.173 =================================================================================================================== 00:27:10.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.173 [2024-06-11 13:55:02.899955] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:10.173 13:55:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1495407 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.173 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.436 rmmod nvme_tcp 00:27:10.436 rmmod nvme_fabrics 00:27:10.436 rmmod nvme_keyring 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1495296 ']' 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1495296 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1495296 ']' 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1495296 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1495296 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1495296' 00:27:10.436 killing process with pid 1495296 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1495296 00:27:10.436 [2024-06-11 13:55:03.200197] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:10.436 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1495296 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.702 13:55:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.611 13:55:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.611 13:55:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:27:12.611 00:27:12.611 real 0m22.757s 00:27:12.611 user 0m23.122s 00:27:12.611 sys 0m11.033s 00:27:12.611 13:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:12.611 13:55:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:12.611 ************************************ 00:27:12.611 END TEST nvmf_fips 
00:27:12.611 ************************************ 00:27:12.870 13:55:05 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:27:12.870 13:55:05 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:12.870 13:55:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:12.870 13:55:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:12.870 13:55:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:12.870 ************************************ 00:27:12.870 START TEST nvmf_fuzz 00:27:12.870 ************************************ 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:12.870 * Looking for test storage... 00:27:12.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.870 13:55:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.871 13:55:05 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.871 13:55:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:19.438 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:19.438 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:19.438 Found net devices under 0000:af:00.0: cvl_0_0 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:19.438 Found net devices under 0000:af:00.1: cvl_0_1 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.438 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:19.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:19.697 00:27:19.697 --- 10.0.0.2 ping statistics --- 00:27:19.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.697 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:19.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:27:19.697 00:27:19.697 --- 10.0.0.1 ping statistics --- 00:27:19.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.697 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1501069 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1501069 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 1501069 ']' 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
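The block above is nvmftestinit doing its physical-NIC setup: both ports of the E810 adapter (0000:af:00.0 and 0000:af:00.1) are detected as ice-driven net devices cvl_0_0 and cvl_0_1, and because the test wants real on-the-wire NVMe/TCP traffic between two ports of the same host, one port is moved into its own network namespace. Condensed from the trace, with interface and namespace names taken from this run, the topology is roughly:

  # Target side lives in a namespace so traffic actually crosses the wire.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # port 0 -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                              # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

The two successful pings (0.176 ms and 0.206 ms) confirm the link, and NVMF_APP is then prefixed with 'ip netns exec cvl_0_0_ns_spdk' so that nvmf_tgt listens inside the namespace while the fuzzer connects from the root namespace.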
00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:19.697 13:55:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.074 Malloc0 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.074 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:21.075 13:55:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:53.157 Fuzzing completed. 
Shutting down the fuzz application 00:27:53.157 00:27:53.157 Dumping successful admin opcodes: 00:27:53.157 8, 9, 10, 24, 00:27:53.157 Dumping successful io opcodes: 00:27:53.157 0, 9, 00:27:53.157 NS: 0x200003aeff00 I/O qp, Total commands completed: 572211, total successful commands: 3331, random_seed: 1197613312 00:27:53.157 NS: 0x200003aeff00 admin qp, Total commands completed: 63287, total successful commands: 497, random_seed: 4091363776 00:27:53.157 13:55:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:53.157 Fuzzing completed. Shutting down the fuzz application 00:27:53.157 00:27:53.157 Dumping successful admin opcodes: 00:27:53.157 24, 00:27:53.157 Dumping successful io opcodes: 00:27:53.157 00:27:53.157 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1871886032 00:27:53.157 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1871997284 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.157 13:55:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.157 rmmod nvme_tcp 00:27:53.157 rmmod nvme_fabrics 00:27:53.157 rmmod nvme_keyring 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1501069 ']' 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1501069 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 1501069 ']' 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 1501069 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:53.157 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1501069 00:27:53.415 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:53.415 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 
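Two fuzz passes ran back to back against trid 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'. Reading the flags off the trace (my gloss, not the tool's documentation): the first pass is time-bounded random fuzzing, -t 30 seconds with fixed seed -S 123456, and pushed roughly 572k I/O and 63k admin commands, of which 3331 and 497 respectively completed successfully; the second pass replays the curated cases in example.json via -j and is correspondingly tiny (16 admin commands, 4 successful, no I/O). Stripped of workspace paths, the pair of invocations is:

  TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # Timed random pass, reproducible via the fixed seed:
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
  # Replay of the hand-written cases in example.json:
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a

The 'Dumping successful ... opcodes' lines list which opcodes the target accepted rather than rejected; the test passes as long as the target survives the barrage, which it did, since the subsystem teardown below completes normally.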
00:27:53.415 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1501069' 00:27:53.415 killing process with pid 1501069 00:27:53.415 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 1501069 00:27:53.415 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 1501069 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.673 13:55:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.630 13:55:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.630 13:55:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:55.630 00:27:55.630 real 0m42.877s 00:27:55.630 user 0m53.583s 00:27:55.630 sys 0m19.035s 00:27:55.630 13:55:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:55.630 13:55:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:55.630 ************************************ 00:27:55.630 END TEST nvmf_fuzz 00:27:55.630 ************************************ 00:27:55.630 13:55:48 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:55.630 13:55:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:55.630 13:55:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:55.630 13:55:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:55.890 ************************************ 00:27:55.890 START TEST nvmf_multiconnection 00:27:55.890 ************************************ 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:55.890 * Looking for test storage... 
00:27:55.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same three toolchain directories repeated four more times]:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=[same entries, rotated to begin at /opt/protoc/21.7/bin] 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo [the exported PATH, identical to the export.sh@4 value] 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.890 13:55:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.465 13:55:55 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:02.465 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:02.465 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.465 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:02.466 Found net devices under 0000:af:00.0: cvl_0_0 00:28:02.466 13:55:55 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:02.466 Found net devices under 0000:af:00.1: cvl_0_1 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.466 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:28:02.726 00:28:02.726 --- 10.0.0.2 ping statistics --- 00:28:02.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.726 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:28:02.726 00:28:02.726 --- 10.0.0.1 ping statistics --- 00:28:02.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.726 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1510180 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1510180 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 1510180 ']' 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
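Same namespace setup as in the fuzz test above, now for the multiconnection run, but the target is started with a wider core mask: 'nvmf_tgt -i 0 -e 0xFFFF -m 0xF', i.e. shared-memory id 0 (the -i value from build_nvmf_app_args), all tracepoint groups enabled, and mask 0xF = binary 1111 = cores 0 through 3, which is why four reactor threads report in below. waitforlisten then blocks until the app answers RPCs on /var/tmp/spdk.sock. A minimal readiness poll in the same spirit (the loop is my sketch, not the harness's exact code) would be:

  # Wait until the SPDK app services RPCs on its UNIX-domain socket.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done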
00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:02.726 13:55:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:02.985 [2024-06-11 13:55:55.672929] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:28:02.985 [2024-06-11 13:55:55.672995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.985 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.985 [2024-06-11 13:55:55.782827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:02.985 [2024-06-11 13:55:55.872299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.985 [2024-06-11 13:55:55.872342] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.985 [2024-06-11 13:55:55.872356] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.985 [2024-06-11 13:55:55.872367] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.985 [2024-06-11 13:55:55.872377] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.985 [2024-06-11 13:55:55.872432] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.985 [2024-06-11 13:55:55.872517] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.985 [2024-06-11 13:55:55.872664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.985 [2024-06-11 13:55:55.872666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 [2024-06-11 13:55:56.643718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 Malloc1 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 [2024-06-11 13:55:56.707515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 Malloc2 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 Malloc3 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 Malloc4 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:28:03.922 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.181 Malloc5 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.181 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 Malloc6 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 Malloc7 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 Malloc8 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 Malloc9 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.182 Malloc10 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.182 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 Malloc11 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
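The trace above is the target-side provisioning loop of multiconnection.sh (the @21-@25 markers are its line numbers): for each of the 11 subsystems it creates a 64 MB malloc bdev with a 512-byte block size, creates a subsystem with serial number SPDK<i> that allows any host (-a), attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A minimal standalone sketch of that loop, assuming SPDK's scripts/rpc.py as the transport for the same RPCs the rpc_cmd helper issues (path inferred from the workspace layout in this log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
NVMF_SUBSYS=11

for i in $(seq 1 $NVMF_SUBSYS); do
    # 64 MB malloc bdev with 512-byte blocks (bdev_malloc_create <size_mb> <block_size>)
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    # subsystem with serial number SPDK$i; -a allows any host NQN to connect
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    # expose the bdev as a namespace of the subsystem
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    # accept TCP connections on the test NIC's address
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done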
00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.441 13:55:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:05.816 13:55:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:28:05.816 13:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:05.816 13:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:05.816 13:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:05.816 13:55:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:07.718 13:56:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:28:09.095 13:56:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:28:09.095 13:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:09.095 13:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:09.095 13:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:09.095 13:56:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:11.000 
13:56:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.000 13:56:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:28:12.415 13:56:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:28:12.415 13:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:12.415 13:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:12.415 13:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:12.415 13:56:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:14.949 13:56:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:28:16.327 13:56:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:28:16.327 13:56:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:16.327 13:56:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:16.327 13:56:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:16.327 13:56:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:18.237 13:56:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:28:19.616 13:56:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:28:19.616 13:56:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:19.616 13:56:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:19.616 13:56:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:19.616 13:56:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:21.549 13:56:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:28:23.465 13:56:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:28:23.465 13:56:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:23.465 13:56:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:23.465 13:56:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:23.465 13:56:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:25.370 13:56:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:28:26.748 13:56:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:28:26.748 13:56:19 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:26.748 13:56:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:26.748 13:56:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:26.748 13:56:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:28.655 13:56:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:28:30.034 13:56:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:28:30.034 13:56:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:30.034 13:56:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:30.034 13:56:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:30.034 13:56:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:32.569 13:56:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:28:33.511 13:56:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:33.511 13:56:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:33.511 13:56:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:33.511 13:56:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 
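Each host-side iteration above pairs an nvme connect against cnode<i> (passing the host's --hostnqn/--hostid) with a waitforserial SPDK<i> call that polls until the matching block device shows up. Reconstructed from the autotest_common.sh@1197-1207 trace entries, the helper looks roughly like this (a sketch; any retry behavior beyond what the trace shows is assumed):

waitforserial() {
    local serial=$1
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    if [[ -n ${2:-} ]]; then
        nvme_device_counter=$2    # optionally wait for more than one device (sh@1199)
    fi
    sleep 2    # give the fabric attach a moment before the first check (sh@1204)
    while (( i++ <= 15 )); do    # up to 16 attempts (sh@1205)
        # count block devices whose SERIAL column matches, e.g. SPDK5 (sh@1206)
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0    # sh@1207
        sleep 2
    done
    return 1
}

# Typical use, mirroring the loop in the trace:
#   nvme connect --hostnqn=<host NQN> --hostid=<host ID> -t tcp \
#       -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 && waitforserial SPDK5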
00:28:33.511 13:56:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:35.535 13:56:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:28:37.440 13:56:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:37.440 13:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:37.440 13:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:37.440 13:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:37.440 13:56:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:39.346 13:56:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:28:41.250 13:56:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:41.250 13:56:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:41.250 13:56:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:41.250 13:56:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:41.250 13:56:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:43.154 13:56:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:43.154 [global] 00:28:43.154 thread=1 00:28:43.154 invalidate=1 00:28:43.154 rw=read 00:28:43.154 time_based=1 00:28:43.154 runtime=10 00:28:43.154 ioengine=libaio 00:28:43.154 direct=1 00:28:43.154 bs=262144 00:28:43.154 iodepth=64 00:28:43.154 norandommap=1 00:28:43.154 numjobs=1 00:28:43.154 00:28:43.154 [job0] 00:28:43.154 filename=/dev/nvme0n1 00:28:43.154 [job1] 00:28:43.154 filename=/dev/nvme10n1 00:28:43.154 [job2] 00:28:43.154 filename=/dev/nvme1n1 00:28:43.154 [job3] 00:28:43.154 filename=/dev/nvme2n1 00:28:43.154 [job4] 00:28:43.154 filename=/dev/nvme3n1 00:28:43.154 [job5] 00:28:43.154 filename=/dev/nvme4n1 00:28:43.154 [job6] 00:28:43.154 filename=/dev/nvme5n1 00:28:43.154 [job7] 00:28:43.154 filename=/dev/nvme6n1 00:28:43.154 [job8] 00:28:43.154 filename=/dev/nvme7n1 00:28:43.154 [job9] 00:28:43.154 filename=/dev/nvme8n1 00:28:43.154 [job10] 00:28:43.154 filename=/dev/nvme9n1 00:28:43.154 Could not set queue depth (nvme0n1) 00:28:43.154 Could not set queue depth (nvme10n1) 00:28:43.154 Could not set queue depth (nvme1n1) 00:28:43.154 Could not set queue depth (nvme2n1) 00:28:43.154 Could not set queue depth (nvme3n1) 00:28:43.154 Could not set queue depth (nvme4n1) 00:28:43.154 Could not set queue depth (nvme5n1) 00:28:43.154 Could not set queue depth (nvme6n1) 00:28:43.154 Could not set queue depth (nvme7n1) 00:28:43.154 Could not set queue depth (nvme8n1) 00:28:43.154 Could not set queue depth (nvme9n1) 00:28:43.738 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:43.738 fio-3.35 00:28:43.738 Starting 11 threads 00:28:55.998 00:28:55.998 job0: 
(groupid=0, jobs=1): err= 0: pid=1517261: Tue Jun 11 13:56:47 2024 00:28:55.998 read: IOPS=935, BW=234MiB/s (245MB/s)(2356MiB/10071msec) 00:28:55.998 slat (usec): min=12, max=47607, avg=869.64, stdev=2672.05 00:28:55.998 clat (usec): min=1600, max=186422, avg=67372.52, stdev=28282.35 00:28:55.998 lat (usec): min=1636, max=192887, avg=68242.16, stdev=28579.61 00:28:55.998 clat percentiles (msec): 00:28:55.998 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 39], 20.00th=[ 48], 00:28:55.998 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 68], 00:28:55.998 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 128], 00:28:55.998 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 184], 00:28:55.998 | 99.99th=[ 186] 00:28:55.998 bw ( KiB/s): min=154624, max=365349, per=12.59%, avg=239561.60, stdev=59692.65, samples=20 00:28:55.998 iops : min= 604, max= 1427, avg=935.75, stdev=233.19, samples=20 00:28:55.998 lat (msec) : 2=0.01%, 4=0.18%, 10=0.44%, 20=1.54%, 50=21.59% 00:28:55.998 lat (msec) : 100=66.17%, 250=10.08% 00:28:55.998 cpu : usr=0.57%, sys=3.84%, ctx=1984, majf=0, minf=4097 00:28:55.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:28:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:55.998 issued rwts: total=9423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:55.998 job1: (groupid=0, jobs=1): err= 0: pid=1517274: Tue Jun 11 13:56:47 2024 00:28:55.998 read: IOPS=514, BW=129MiB/s (135MB/s)(1296MiB/10070msec) 00:28:55.998 slat (usec): min=15, max=206655, avg=1718.56, stdev=6983.85 00:28:55.998 clat (msec): min=2, max=434, avg=122.46, stdev=68.69 00:28:55.998 lat (msec): min=2, max=434, avg=124.18, stdev=69.78 00:28:55.998 clat percentiles (msec): 00:28:55.998 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 21], 20.00th=[ 77], 00:28:55.998 | 30.00th=[ 90], 40.00th=[ 97], 50.00th=[ 106], 60.00th=[ 123], 00:28:55.998 | 70.00th=[ 159], 80.00th=[ 197], 90.00th=[ 224], 95.00th=[ 239], 00:28:55.998 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 338], 99.95th=[ 384], 00:28:55.998 | 99.99th=[ 435] 00:28:55.998 bw ( KiB/s): min=71680, max=333312, per=6.89%, avg=131087.80, stdev=59815.48, samples=20 00:28:55.998 iops : min= 280, max= 1302, avg=512.05, stdev=233.66, samples=20 00:28:55.998 lat (msec) : 4=1.39%, 10=4.21%, 20=4.36%, 50=3.61%, 100=30.13% 00:28:55.998 lat (msec) : 250=54.63%, 500=1.68% 00:28:55.998 cpu : usr=0.38%, sys=2.36%, ctx=1200, majf=0, minf=4097 00:28:55.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:55.998 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:55.998 job2: (groupid=0, jobs=1): err= 0: pid=1517292: Tue Jun 11 13:56:47 2024 00:28:55.998 read: IOPS=451, BW=113MiB/s (118MB/s)(1140MiB/10096msec) 00:28:55.998 slat (usec): min=15, max=80835, avg=1755.85, stdev=5937.21 00:28:55.998 clat (msec): min=5, max=296, avg=139.74, stdev=60.43 00:28:55.998 lat (msec): min=5, max=296, avg=141.50, stdev=61.54 00:28:55.998 clat percentiles (msec): 00:28:55.998 | 1.00th=[ 9], 5.00th=[ 50], 10.00th=[ 74], 20.00th=[ 89], 00:28:55.998 | 30.00th=[ 100], 40.00th=[ 113], 50.00th=[ 134], 
60.00th=[ 150], 00:28:55.998 | 70.00th=[ 178], 80.00th=[ 209], 90.00th=[ 226], 95.00th=[ 236], 00:28:55.998 | 99.00th=[ 249], 99.50th=[ 257], 99.90th=[ 271], 99.95th=[ 271], 00:28:55.998 | 99.99th=[ 296] 00:28:55.999 bw ( KiB/s): min=69120, max=187904, per=6.05%, avg=115144.85, stdev=41925.52, samples=20 00:28:55.999 iops : min= 270, max= 734, avg=449.75, stdev=163.80, samples=20 00:28:55.999 lat (msec) : 10=1.67%, 20=0.92%, 50=2.54%, 100=25.70%, 250=68.21% 00:28:55.999 lat (msec) : 500=0.96% 00:28:55.999 cpu : usr=0.33%, sys=1.97%, ctx=1176, majf=0, minf=4097 00:28:55.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:55.999 issued rwts: total=4561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:55.999 job3: (groupid=0, jobs=1): err= 0: pid=1517306: Tue Jun 11 13:56:47 2024 00:28:55.999 read: IOPS=722, BW=181MiB/s (189MB/s)(1818MiB/10066msec) 00:28:55.999 slat (usec): min=13, max=185156, avg=998.24, stdev=5854.11 00:28:55.999 clat (usec): min=1163, max=389747, avg=87505.58, stdev=67481.67 00:28:55.999 lat (usec): min=1202, max=389826, avg=88503.82, stdev=68286.30 00:28:55.999 clat percentiles (msec): 00:28:55.999 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 18], 20.00th=[ 30], 00:28:55.999 | 30.00th=[ 46], 40.00th=[ 57], 50.00th=[ 70], 60.00th=[ 82], 00:28:55.999 | 70.00th=[ 102], 80.00th=[ 140], 90.00th=[ 215], 95.00th=[ 230], 00:28:55.999 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 330], 99.95th=[ 351], 00:28:55.999 | 99.99th=[ 388] 00:28:55.999 bw ( KiB/s): min=68096, max=318464, per=9.70%, avg=184548.55, stdev=76656.69, samples=20 00:28:55.999 iops : min= 266, max= 1244, avg=720.80, stdev=299.35, samples=20 00:28:55.999 lat (msec) : 2=0.08%, 4=0.54%, 10=4.63%, 20=6.71%, 50=21.23% 00:28:55.999 lat (msec) : 100=36.23%, 250=29.74%, 500=0.83% 00:28:55.999 cpu : usr=0.34%, sys=3.05%, ctx=1696, majf=0, minf=4097 00:28:55.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:28:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:55.999 issued rwts: total=7272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:55.999 job4: (groupid=0, jobs=1): err= 0: pid=1517316: Tue Jun 11 13:56:47 2024 00:28:55.999 read: IOPS=572, BW=143MiB/s (150MB/s)(1441MiB/10075msec) 00:28:55.999 slat (usec): min=12, max=75902, avg=1265.11, stdev=5042.35 00:28:55.999 clat (usec): min=1477, max=284928, avg=110465.03, stdev=70640.17 00:28:55.999 lat (usec): min=1530, max=297401, avg=111730.14, stdev=71782.00 00:28:55.999 clat percentiles (msec): 00:28:55.999 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 41], 00:28:55.999 | 30.00th=[ 62], 40.00th=[ 79], 50.00th=[ 94], 60.00th=[ 120], 00:28:55.999 | 70.00th=[ 148], 80.00th=[ 188], 90.00th=[ 222], 95.00th=[ 230], 00:28:55.999 | 99.00th=[ 249], 99.50th=[ 259], 99.90th=[ 284], 99.95th=[ 284], 00:28:55.999 | 99.99th=[ 284] 00:28:55.999 bw ( KiB/s): min=70514, max=284160, per=7.67%, avg=145891.50, stdev=75312.46, samples=20 00:28:55.999 iops : min= 275, max= 1110, avg=569.85, stdev=294.20, samples=20 00:28:55.999 lat (msec) : 2=0.07%, 4=0.16%, 10=1.86%, 20=4.29%, 50=19.40% 00:28:55.999 lat (msec) : 
100=27.73%, 250=45.62%, 500=0.88% 00:28:55.999 cpu : usr=0.30%, sys=2.39%, ctx=1535, majf=0, minf=3221 00:28:55.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:55.999 issued rwts: total=5763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:55.999 job5: (groupid=0, jobs=1): err= 0: pid=1517354: Tue Jun 11 13:56:47 2024 00:28:55.999 read: IOPS=672, BW=168MiB/s (176MB/s)(1697MiB/10093msec) 00:28:55.999 slat (usec): min=12, max=170446, avg=1053.25, stdev=4813.70 00:28:55.999 clat (usec): min=1553, max=271269, avg=94008.02, stdev=64245.10 00:28:55.999 lat (usec): min=1603, max=302021, avg=95061.28, stdev=65185.86 00:28:55.999 clat percentiles (msec): 00:28:55.999 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 31], 20.00th=[ 48], 00:28:55.999 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 75], 60.00th=[ 88], 00:28:55.999 | 70.00th=[ 102], 80.00th=[ 126], 90.00th=[ 218], 95.00th=[ 234], 00:28:55.999 | 99.00th=[ 257], 99.50th=[ 268], 99.90th=[ 271], 99.95th=[ 271], 00:28:55.999 | 99.99th=[ 271] 00:28:55.999 bw ( KiB/s): min=69120, max=331776, per=9.05%, avg=172112.05, stdev=78321.43, samples=20 00:28:55.999 iops : min= 270, max= 1296, avg=672.25, stdev=305.98, samples=20 00:28:55.999 lat (msec) : 2=0.06%, 4=0.55%, 10=1.56%, 20=3.70%, 50=16.33% 00:28:55.999 lat (msec) : 100=47.49%, 250=28.35%, 500=1.97% 00:28:55.999 cpu : usr=0.40%, sys=2.71%, ctx=1688, majf=0, minf=4097 00:28:55.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:55.999 issued rwts: total=6787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:55.999 job6: (groupid=0, jobs=1): err= 0: pid=1517372: Tue Jun 11 13:56:47 2024 00:28:55.999 read: IOPS=806, BW=202MiB/s (211MB/s)(2034MiB/10090msec) 00:28:55.999 slat (usec): min=13, max=159710, avg=972.95, stdev=4016.11 00:28:55.999 clat (msec): min=4, max=299, avg=78.29, stdev=42.20 00:28:55.999 lat (msec): min=4, max=386, avg=79.26, stdev=42.68 00:28:55.999 clat percentiles (msec): 00:28:55.999 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 39], 20.00th=[ 52], 00:28:55.999 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 79], 00:28:55.999 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 127], 95.00th=[ 165], 00:28:55.999 | 99.00th=[ 245], 99.50th=[ 257], 99.90th=[ 271], 99.95th=[ 279], 00:28:55.999 | 99.99th=[ 300] 00:28:55.999 bw ( KiB/s): min=83968, max=296960, per=10.86%, avg=206652.90, stdev=68481.81, samples=20 00:28:55.999 iops : min= 328, max= 1160, avg=807.15, stdev=267.51, samples=20 00:28:55.999 lat (msec) : 10=0.96%, 20=2.42%, 50=14.87%, 100=61.71%, 250=19.28% 00:28:55.999 lat (msec) : 500=0.76% 00:28:55.999 cpu : usr=0.40%, sys=3.48%, ctx=1768, majf=0, minf=4097 00:28:55.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:56.000 issued rwts: total=8137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.000 latency : target=0, window=0, percentile=100.00%, depth=64 
00:28:56.000 job7: (groupid=0, jobs=1): err= 0: pid=1517387: Tue Jun 11 13:56:47 2024 00:28:56.000 read: IOPS=868, BW=217MiB/s (228MB/s)(2188MiB/10073msec) 00:28:56.000 slat (usec): min=11, max=172075, avg=612.59, stdev=4341.04 00:28:56.000 clat (usec): min=1392, max=332787, avg=72988.20, stdev=65385.65 00:28:56.000 lat (usec): min=1423, max=370106, avg=73600.79, stdev=65956.55 00:28:56.000 clat percentiles (msec): 00:28:56.000 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 14], 20.00th=[ 21], 00:28:56.000 | 30.00th=[ 25], 40.00th=[ 33], 50.00th=[ 49], 60.00th=[ 72], 00:28:56.000 | 70.00th=[ 91], 80.00th=[ 122], 90.00th=[ 188], 95.00th=[ 222], 00:28:56.000 | 99.00th=[ 251], 99.50th=[ 259], 99.90th=[ 279], 99.95th=[ 279], 00:28:56.000 | 99.99th=[ 334] 00:28:56.000 bw ( KiB/s): min=75264, max=512000, per=11.69%, avg=222355.80, stdev=118794.64, samples=20 00:28:56.000 iops : min= 294, max= 2000, avg=868.55, stdev=464.02, samples=20 00:28:56.000 lat (msec) : 2=0.05%, 4=0.95%, 10=4.58%, 20=14.27%, 50=30.97% 00:28:56.000 lat (msec) : 100=23.45%, 250=24.72%, 500=1.01% 00:28:56.000 cpu : usr=0.45%, sys=3.44%, ctx=2058, majf=0, minf=4097 00:28:56.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:56.000 issued rwts: total=8750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.000 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:56.000 job8: (groupid=0, jobs=1): err= 0: pid=1517425: Tue Jun 11 13:56:47 2024 00:28:56.000 read: IOPS=602, BW=151MiB/s (158MB/s)(1519MiB/10089msec) 00:28:56.000 slat (usec): min=13, max=130239, avg=1270.40, stdev=5427.57 00:28:56.000 clat (usec): min=1359, max=322671, avg=104887.80, stdev=64829.50 00:28:56.000 lat (usec): min=1409, max=363996, avg=106158.20, stdev=65886.81 00:28:56.000 clat percentiles (msec): 00:28:56.000 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 53], 00:28:56.000 | 30.00th=[ 68], 40.00th=[ 79], 50.00th=[ 90], 60.00th=[ 102], 00:28:56.000 | 70.00th=[ 124], 80.00th=[ 171], 90.00th=[ 218], 95.00th=[ 230], 00:28:56.000 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 268], 99.95th=[ 305], 00:28:56.000 | 99.99th=[ 321] 00:28:56.000 bw ( KiB/s): min=67072, max=244736, per=8.09%, avg=153874.70, stdev=53202.28, samples=20 00:28:56.000 iops : min= 262, max= 956, avg=601.00, stdev=207.84, samples=20 00:28:56.000 lat (msec) : 2=0.08%, 4=0.30%, 10=1.84%, 20=4.86%, 50=11.57% 00:28:56.000 lat (msec) : 100=40.30%, 250=40.56%, 500=0.49% 00:28:56.000 cpu : usr=0.30%, sys=2.51%, ctx=1508, majf=0, minf=4097 00:28:56.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:56.000 issued rwts: total=6075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.000 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:56.000 job9: (groupid=0, jobs=1): err= 0: pid=1517445: Tue Jun 11 13:56:47 2024 00:28:56.000 read: IOPS=542, BW=136MiB/s (142MB/s)(1369MiB/10082msec) 00:28:56.000 slat (usec): min=12, max=208613, avg=1422.22, stdev=6303.90 00:28:56.000 clat (msec): min=2, max=294, avg=116.31, stdev=69.91 00:28:56.000 lat (msec): min=2, max=436, avg=117.74, stdev=71.11 00:28:56.000 clat percentiles (msec): 00:28:56.000 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 48], 
00:28:56.000 | 30.00th=[ 80], 40.00th=[ 93], 50.00th=[ 106], 60.00th=[ 126], 00:28:56.000 | 70.00th=[ 153], 80.00th=[ 186], 90.00th=[ 224], 95.00th=[ 234], 00:28:56.000 | 99.00th=[ 259], 99.50th=[ 284], 99.90th=[ 288], 99.95th=[ 288], 00:28:56.000 | 99.99th=[ 296] 00:28:56.000 bw ( KiB/s): min=69632, max=290816, per=7.28%, avg=138511.80, stdev=63007.43, samples=20 00:28:56.000 iops : min= 272, max= 1136, avg=541.05, stdev=246.13, samples=20 00:28:56.000 lat (msec) : 4=0.27%, 10=3.53%, 20=5.83%, 50=11.13%, 100=25.96% 00:28:56.000 lat (msec) : 250=51.48%, 500=1.81% 00:28:56.000 cpu : usr=0.27%, sys=2.19%, ctx=1394, majf=0, minf=4097 00:28:56.000 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:56.000 issued rwts: total=5474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.000 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:56.000 job10: (groupid=0, jobs=1): err= 0: pid=1517461: Tue Jun 11 13:56:47 2024 00:28:56.000 read: IOPS=755, BW=189MiB/s (198MB/s)(1902MiB/10074msec) 00:28:56.000 slat (usec): min=14, max=167850, avg=1145.53, stdev=5725.07 00:28:56.000 clat (msec): min=2, max=373, avg=83.52, stdev=63.85 00:28:56.000 lat (msec): min=2, max=373, avg=84.67, stdev=64.92 00:28:56.000 clat percentiles (msec): 00:28:56.000 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 26], 20.00th=[ 34], 00:28:56.000 | 30.00th=[ 45], 40.00th=[ 52], 50.00th=[ 62], 60.00th=[ 77], 00:28:56.000 | 70.00th=[ 99], 80.00th=[ 117], 90.00th=[ 205], 95.00th=[ 230], 00:28:56.000 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 355], 00:28:56.000 | 99.99th=[ 376] 00:28:56.000 bw ( KiB/s): min=63488, max=400384, per=10.15%, avg=193097.80, stdev=101149.70, samples=20 00:28:56.000 iops : min= 248, max= 1564, avg=754.25, stdev=395.08, samples=20 00:28:56.000 lat (msec) : 4=0.04%, 10=2.04%, 20=4.02%, 50=31.85%, 100=32.94% 00:28:56.000 lat (msec) : 250=27.84%, 500=1.26% 00:28:56.001 cpu : usr=0.39%, sys=3.42%, ctx=1605, majf=0, minf=4097 00:28:56.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:56.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:56.001 issued rwts: total=7607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:56.001 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:56.001 00:28:56.001 Run status group 0 (all jobs): 00:28:56.001 READ: bw=1858MiB/s (1948MB/s), 113MiB/s-234MiB/s (118MB/s-245MB/s), io=18.3GiB (19.7GB), run=10066-10096msec 00:28:56.001 00:28:56.001 Disk stats (read/write): 00:28:56.001 nvme0n1: ios=18487/0, merge=0/0, ticks=1231765/0, in_queue=1231765, util=94.45% 00:28:56.001 nvme10n1: ios=10075/0, merge=0/0, ticks=1221804/0, in_queue=1221804, util=94.87% 00:28:56.001 nvme1n1: ios=8843/0, merge=0/0, ticks=1227672/0, in_queue=1227672, util=95.54% 00:28:56.001 nvme2n1: ios=14240/0, merge=0/0, ticks=1231292/0, in_queue=1231292, util=95.90% 00:28:56.001 nvme3n1: ios=11250/0, merge=0/0, ticks=1226914/0, in_queue=1226914, util=96.13% 00:28:56.001 nvme4n1: ios=13252/0, merge=0/0, ticks=1231189/0, in_queue=1231189, util=96.88% 00:28:56.001 nvme5n1: ios=16036/0, merge=0/0, ticks=1230096/0, in_queue=1230096, util=97.25% 00:28:56.001 nvme6n1: ios=17197/0, merge=0/0, ticks=1231076/0, in_queue=1231076, util=97.50% 00:28:56.001 nvme7n1: 
ios=11896/0, merge=0/0, ticks=1226644/0, in_queue=1226644, util=98.49% 00:28:56.001 nvme8n1: ios=10588/0, merge=0/0, ticks=1224310/0, in_queue=1224310, util=98.91% 00:28:56.001 nvme9n1: ios=14936/0, merge=0/0, ticks=1227686/0, in_queue=1227686, util=99.26% 00:28:56.001 13:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:56.001 [global] 00:28:56.001 thread=1 00:28:56.001 invalidate=1 00:28:56.001 rw=randwrite 00:28:56.001 time_based=1 00:28:56.001 runtime=10 00:28:56.001 ioengine=libaio 00:28:56.001 direct=1 00:28:56.001 bs=262144 00:28:56.001 iodepth=64 00:28:56.001 norandommap=1 00:28:56.001 numjobs=1 00:28:56.001 00:28:56.001 [job0] 00:28:56.001 filename=/dev/nvme0n1 00:28:56.001 [job1] 00:28:56.001 filename=/dev/nvme10n1 00:28:56.001 [job2] 00:28:56.001 filename=/dev/nvme1n1 00:28:56.001 [job3] 00:28:56.001 filename=/dev/nvme2n1 00:28:56.001 [job4] 00:28:56.001 filename=/dev/nvme3n1 00:28:56.001 [job5] 00:28:56.001 filename=/dev/nvme4n1 00:28:56.001 [job6] 00:28:56.001 filename=/dev/nvme5n1 00:28:56.001 [job7] 00:28:56.001 filename=/dev/nvme6n1 00:28:56.001 [job8] 00:28:56.001 filename=/dev/nvme7n1 00:28:56.001 [job9] 00:28:56.001 filename=/dev/nvme8n1 00:28:56.001 [job10] 00:28:56.001 filename=/dev/nvme9n1 00:28:56.001 Could not set queue depth (nvme0n1) 00:28:56.001 Could not set queue depth (nvme10n1) 00:28:56.001 Could not set queue depth (nvme1n1) 00:28:56.001 Could not set queue depth (nvme2n1) 00:28:56.001 Could not set queue depth (nvme3n1) 00:28:56.001 Could not set queue depth (nvme4n1) 00:28:56.001 Could not set queue depth (nvme5n1) 00:28:56.001 Could not set queue depth (nvme6n1) 00:28:56.001 Could not set queue depth (nvme7n1) 00:28:56.001 Could not set queue depth (nvme8n1) 00:28:56.001 Could not set queue depth (nvme9n1) 00:28:56.001 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:56.001 fio-3.35 00:28:56.001 Starting 11 threads 00:29:05.997 00:29:05.997 job0: (groupid=0, jobs=1): err= 0: pid=1518988: Tue Jun 11 13:56:58 2024 00:29:05.997 write: IOPS=447, BW=112MiB/s (117MB/s)(1135MiB/10147msec); 0 zone 
resets 00:29:05.997 slat (usec): min=21, max=63775, avg=1712.37, stdev=4243.40 00:29:05.997 clat (msec): min=2, max=301, avg=141.27, stdev=60.96 00:29:05.997 lat (msec): min=2, max=301, avg=142.99, stdev=61.84 00:29:05.997 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 10], 5.00th=[ 34], 10.00th=[ 56], 20.00th=[ 91], 00:29:05.998 | 30.00th=[ 113], 40.00th=[ 133], 50.00th=[ 142], 60.00th=[ 159], 00:29:05.998 | 70.00th=[ 174], 80.00th=[ 192], 90.00th=[ 226], 95.00th=[ 239], 00:29:05.998 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 300], 99.95th=[ 300], 00:29:05.998 | 99.99th=[ 300] 00:29:05.998 bw ( KiB/s): min=71536, max=203264, per=7.77%, avg=114628.00, stdev=32413.24, samples=20 00:29:05.998 iops : min= 279, max= 794, avg=447.35, stdev=126.77, samples=20 00:29:05.998 lat (msec) : 4=0.07%, 10=1.08%, 20=1.78%, 50=5.42%, 100=17.05% 00:29:05.998 lat (msec) : 250=71.89%, 500=2.71% 00:29:05.998 cpu : usr=1.15%, sys=1.66%, ctx=2361, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,4540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job1: (groupid=0, jobs=1): err= 0: pid=1519000: Tue Jun 11 13:56:58 2024 00:29:05.998 write: IOPS=586, BW=147MiB/s (154MB/s)(1486MiB/10145msec); 0 zone resets 00:29:05.998 slat (usec): min=23, max=189328, avg=1293.39, stdev=4150.44 00:29:05.998 clat (usec): min=1300, max=312143, avg=107838.38, stdev=63506.77 00:29:05.998 lat (usec): min=1347, max=312220, avg=109131.77, stdev=64244.45 00:29:05.998 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 51], 00:29:05.998 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 107], 60.00th=[ 129], 00:29:05.998 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 199], 95.00th=[ 232], 00:29:05.998 | 99.00th=[ 262], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 309], 00:29:05.998 | 99.99th=[ 313] 00:29:05.998 bw ( KiB/s): min=69771, max=244736, per=10.22%, avg=150666.30, stdev=55365.23, samples=20 00:29:05.998 iops : min= 272, max= 956, avg=588.10, stdev=216.23, samples=20 00:29:05.998 lat (msec) : 2=0.50%, 4=0.72%, 10=2.72%, 20=4.17%, 50=11.93% 00:29:05.998 lat (msec) : 100=28.09%, 250=49.91%, 500=1.95% 00:29:05.998 cpu : usr=1.73%, sys=2.32%, ctx=3011, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,5945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job2: (groupid=0, jobs=1): err= 0: pid=1519001: Tue Jun 11 13:56:58 2024 00:29:05.998 write: IOPS=439, BW=110MiB/s (115MB/s)(1110MiB/10100msec); 0 zone resets 00:29:05.998 slat (usec): min=23, max=59138, avg=1794.57, stdev=4245.91 00:29:05.998 clat (usec): min=1749, max=276940, avg=143769.34, stdev=58643.94 00:29:05.998 lat (usec): min=1802, max=277003, avg=145563.92, stdev=59542.93 00:29:05.998 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 51], 20.00th=[ 102], 00:29:05.998 | 30.00th=[ 120], 40.00th=[ 142], 50.00th=[ 157], 60.00th=[ 165], 00:29:05.998 | 70.00th=[ 176], 80.00th=[ 197], 
90.00th=[ 213], 95.00th=[ 224], 00:29:05.998 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 275], 99.95th=[ 279], 00:29:05.998 | 99.99th=[ 279] 00:29:05.998 bw ( KiB/s): min=71823, max=178533, per=7.60%, avg=112126.60, stdev=29186.00, samples=20 00:29:05.998 iops : min= 280, max= 697, avg=437.65, stdev=114.07, samples=20 00:29:05.998 lat (msec) : 2=0.05%, 4=0.18%, 10=1.87%, 20=2.95%, 50=4.87% 00:29:05.998 lat (msec) : 100=9.57%, 250=79.50%, 500=1.01% 00:29:05.998 cpu : usr=1.04%, sys=1.49%, ctx=2241, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,4439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job3: (groupid=0, jobs=1): err= 0: pid=1519002: Tue Jun 11 13:56:58 2024 00:29:05.998 write: IOPS=540, BW=135MiB/s (142MB/s)(1357MiB/10044msec); 0 zone resets 00:29:05.998 slat (usec): min=24, max=55048, avg=1519.27, stdev=3626.02 00:29:05.998 clat (msec): min=2, max=284, avg=116.82, stdev=59.21 00:29:05.998 lat (msec): min=2, max=286, avg=118.33, stdev=60.06 00:29:05.998 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 45], 20.00th=[ 50], 00:29:05.998 | 30.00th=[ 74], 40.00th=[ 97], 50.00th=[ 123], 60.00th=[ 142], 00:29:05.998 | 70.00th=[ 159], 80.00th=[ 171], 90.00th=[ 194], 95.00th=[ 205], 00:29:05.998 | 99.00th=[ 224], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 284], 00:29:05.998 | 99.99th=[ 284] 00:29:05.998 bw ( KiB/s): min=73875, max=333979, per=9.32%, avg=137485.50, stdev=57161.03, samples=20 00:29:05.998 iops : min= 288, max= 1304, avg=536.65, stdev=223.24, samples=20 00:29:05.998 lat (msec) : 4=0.07%, 10=1.01%, 20=2.78%, 50=17.59%, 100=19.47% 00:29:05.998 lat (msec) : 250=58.65%, 500=0.42% 00:29:05.998 cpu : usr=1.13%, sys=1.94%, ctx=2459, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,5429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job4: (groupid=0, jobs=1): err= 0: pid=1519003: Tue Jun 11 13:56:58 2024 00:29:05.998 write: IOPS=480, BW=120MiB/s (126MB/s)(1213MiB/10101msec); 0 zone resets 00:29:05.998 slat (usec): min=24, max=43570, avg=1628.99, stdev=4052.51 00:29:05.998 clat (usec): min=1992, max=300519, avg=131556.35, stdev=66249.31 00:29:05.998 lat (msec): min=2, max=307, avg=133.19, stdev=67.30 00:29:05.998 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 71], 00:29:05.998 | 30.00th=[ 108], 40.00th=[ 118], 50.00th=[ 127], 60.00th=[ 150], 00:29:05.998 | 70.00th=[ 165], 80.00th=[ 192], 90.00th=[ 222], 95.00th=[ 243], 00:29:05.998 | 99.00th=[ 264], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 296], 00:29:05.998 | 99.99th=[ 300] 00:29:05.998 bw ( KiB/s): min=67719, max=236071, per=8.32%, avg=122678.30, stdev=49923.98, samples=20 00:29:05.998 iops : min= 264, max= 922, avg=478.95, stdev=195.11, samples=20 00:29:05.998 lat (msec) : 2=0.02%, 4=0.12%, 10=1.69%, 20=4.74%, 50=9.21% 00:29:05.998 lat (msec) : 100=10.53%, 250=71.00%, 500=2.68% 00:29:05.998 cpu : usr=1.09%, 
sys=1.58%, ctx=2531, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,4852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job5: (groupid=0, jobs=1): err= 0: pid=1519005: Tue Jun 11 13:56:58 2024 00:29:05.998 write: IOPS=528, BW=132MiB/s (139MB/s)(1338MiB/10118msec); 0 zone resets 00:29:05.998 slat (usec): min=23, max=83981, avg=1636.26, stdev=3566.68 00:29:05.998 clat (msec): min=9, max=239, avg=119.00, stdev=39.68 00:29:05.998 lat (msec): min=11, max=247, avg=120.64, stdev=40.20 00:29:05.998 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 26], 5.00th=[ 50], 10.00th=[ 77], 20.00th=[ 90], 00:29:05.998 | 30.00th=[ 96], 40.00th=[ 103], 50.00th=[ 118], 60.00th=[ 127], 00:29:05.998 | 70.00th=[ 138], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 186], 00:29:05.998 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 234], 99.95th=[ 241], 00:29:05.998 | 99.99th=[ 241] 00:29:05.998 bw ( KiB/s): min=86188, max=192127, per=9.18%, avg=135438.30, stdev=31256.19, samples=20 00:29:05.998 iops : min= 336, max= 750, avg=528.65, stdev=122.18, samples=20 00:29:05.998 lat (msec) : 10=0.04%, 20=0.37%, 50=4.71%, 100=34.07%, 250=60.80% 00:29:05.998 cpu : usr=1.41%, sys=1.94%, ctx=2102, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,5350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job6: (groupid=0, jobs=1): err= 0: pid=1519006: Tue Jun 11 13:56:58 2024 00:29:05.998 write: IOPS=597, BW=149MiB/s (157MB/s)(1515MiB/10147msec); 0 zone resets 00:29:05.998 slat (usec): min=23, max=84341, avg=1226.57, stdev=3174.67 00:29:05.998 clat (usec): min=1327, max=298830, avg=105874.54, stdev=56150.15 00:29:05.998 lat (usec): min=1378, max=298874, avg=107101.12, stdev=56693.92 00:29:05.998 clat percentiles (msec): 00:29:05.998 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 44], 00:29:05.998 | 30.00th=[ 70], 40.00th=[ 90], 50.00th=[ 109], 60.00th=[ 122], 00:29:05.998 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 182], 95.00th=[ 213], 00:29:05.998 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 279], 99.95th=[ 288], 00:29:05.998 | 99.99th=[ 300] 00:29:05.998 bw ( KiB/s): min=73875, max=349883, per=10.42%, avg=153657.45, stdev=65542.48, samples=20 00:29:05.998 iops : min= 288, max= 1366, avg=599.90, stdev=255.92, samples=20 00:29:05.998 lat (msec) : 2=0.08%, 4=0.18%, 10=0.99%, 20=2.59%, 50=18.73% 00:29:05.998 lat (msec) : 100=22.21%, 250=54.78%, 500=0.45% 00:29:05.998 cpu : usr=1.37%, sys=1.97%, ctx=2948, majf=0, minf=1 00:29:05.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:05.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.998 issued rwts: total=0,6061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.998 job7: (groupid=0, jobs=1): err= 0: pid=1519007: Tue Jun 11 
13:56:58 2024 00:29:05.998 write: IOPS=559, BW=140MiB/s (147MB/s)(1412MiB/10100msec); 0 zone resets 00:29:05.998 slat (usec): min=24, max=77547, avg=1547.28, stdev=3293.20 00:29:05.998 clat (msec): min=2, max=251, avg=112.85, stdev=41.81 00:29:05.999 lat (msec): min=2, max=252, avg=114.40, stdev=42.38 00:29:05.999 clat percentiles (msec): 00:29:05.999 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 68], 20.00th=[ 75], 00:29:05.999 | 30.00th=[ 87], 40.00th=[ 103], 50.00th=[ 112], 60.00th=[ 127], 00:29:05.999 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 176], 00:29:05.999 | 99.00th=[ 213], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 247], 00:29:05.999 | 99.99th=[ 253] 00:29:05.999 bw ( KiB/s): min=70797, max=259072, per=9.70%, avg=143059.10, stdev=44816.86, samples=20 00:29:05.999 iops : min= 276, max= 1012, avg=558.50, stdev=175.09, samples=20 00:29:05.999 lat (msec) : 4=0.05%, 10=0.21%, 20=1.65%, 50=4.68%, 100=31.77% 00:29:05.999 lat (msec) : 250=61.63%, 500=0.02% 00:29:05.999 cpu : usr=1.67%, sys=2.07%, ctx=2208, majf=0, minf=1 00:29:05.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:29:05.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.999 issued rwts: total=0,5647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.999 job8: (groupid=0, jobs=1): err= 0: pid=1519008: Tue Jun 11 13:56:58 2024 00:29:05.999 write: IOPS=709, BW=177MiB/s (186MB/s)(1794MiB/10118msec); 0 zone resets 00:29:05.999 slat (usec): min=24, max=73886, avg=1144.79, stdev=2641.01 00:29:05.999 clat (msec): min=2, max=266, avg=89.05, stdev=43.72 00:29:05.999 lat (msec): min=4, max=270, avg=90.19, stdev=44.23 00:29:05.999 clat percentiles (msec): 00:29:05.999 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 40], 20.00th=[ 50], 00:29:05.999 | 30.00th=[ 55], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 97], 00:29:05.999 | 70.00th=[ 109], 80.00th=[ 126], 90.00th=[ 153], 95.00th=[ 167], 00:29:05.999 | 99.00th=[ 213], 99.50th=[ 236], 99.90th=[ 247], 99.95th=[ 259], 00:29:05.999 | 99.99th=[ 268] 00:29:05.999 bw ( KiB/s): min=108761, max=312432, per=12.36%, avg=182268.70, stdev=61137.26, samples=20 00:29:05.999 iops : min= 424, max= 1220, avg=711.75, stdev=238.81, samples=20 00:29:05.999 lat (msec) : 4=0.03%, 10=1.20%, 20=2.45%, 50=16.50%, 100=44.78% 00:29:05.999 lat (msec) : 250=34.96%, 500=0.08% 00:29:05.999 cpu : usr=1.71%, sys=2.33%, ctx=3026, majf=0, minf=1 00:29:05.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:29:05.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.999 issued rwts: total=0,7177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.999 job9: (groupid=0, jobs=1): err= 0: pid=1519009: Tue Jun 11 13:56:58 2024 00:29:05.999 write: IOPS=440, BW=110MiB/s (115MB/s)(1117MiB/10145msec); 0 zone resets 00:29:05.999 slat (usec): min=20, max=83662, avg=1995.12, stdev=4714.22 00:29:05.999 clat (usec): min=1762, max=300555, avg=143185.98, stdev=65980.15 00:29:05.999 lat (usec): min=1829, max=300601, avg=145181.10, stdev=66917.97 00:29:05.999 clat percentiles (msec): 00:29:05.999 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 50], 20.00th=[ 69], 00:29:05.999 | 30.00th=[ 109], 40.00th=[ 138], 50.00th=[ 159], 
60.00th=[ 169], 00:29:05.999 | 70.00th=[ 188], 80.00th=[ 203], 90.00th=[ 224], 95.00th=[ 236], 00:29:05.999 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 292], 99.95th=[ 292], 00:29:05.999 | 99.99th=[ 300] 00:29:05.999 bw ( KiB/s): min=67719, max=227328, per=7.65%, avg=112864.85, stdev=48731.75, samples=20 00:29:05.999 iops : min= 264, max= 888, avg=440.50, stdev=190.46, samples=20 00:29:05.999 lat (msec) : 2=0.02%, 4=0.09%, 10=0.90%, 20=2.69%, 50=9.67% 00:29:05.999 lat (msec) : 100=14.68%, 250=70.69%, 500=1.28% 00:29:05.999 cpu : usr=0.99%, sys=1.50%, ctx=1814, majf=0, minf=1 00:29:05.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:05.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.999 issued rwts: total=0,4469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.999 job10: (groupid=0, jobs=1): err= 0: pid=1519010: Tue Jun 11 13:56:58 2024 00:29:05.999 write: IOPS=449, BW=112MiB/s (118MB/s)(1138MiB/10119msec); 0 zone resets 00:29:05.999 slat (usec): min=23, max=66019, avg=1702.48, stdev=4341.80 00:29:05.999 clat (msec): min=4, max=308, avg=140.48, stdev=62.95 00:29:05.999 lat (msec): min=4, max=308, avg=142.19, stdev=63.87 00:29:05.999 clat percentiles (msec): 00:29:05.999 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 42], 20.00th=[ 71], 00:29:05.999 | 30.00th=[ 122], 40.00th=[ 138], 50.00th=[ 155], 60.00th=[ 165], 00:29:05.999 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 213], 95.00th=[ 224], 00:29:05.999 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 305], 00:29:05.999 | 99.99th=[ 309] 00:29:05.999 bw ( KiB/s): min=67719, max=194949, per=7.80%, avg=115023.55, stdev=31485.99, samples=20 00:29:05.999 iops : min= 264, max= 761, avg=449.00, stdev=123.03, samples=20 00:29:05.999 lat (msec) : 10=0.55%, 20=2.64%, 50=10.39%, 100=12.28%, 250=72.08% 00:29:05.999 lat (msec) : 500=2.06% 00:29:05.999 cpu : usr=1.10%, sys=1.51%, ctx=2402, majf=0, minf=1 00:29:05.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:05.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:05.999 issued rwts: total=0,4553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.999 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:05.999 00:29:05.999 Run status group 0 (all jobs): 00:29:05.999 WRITE: bw=1440MiB/s (1510MB/s), 110MiB/s-177MiB/s (115MB/s-186MB/s), io=14.3GiB (15.3GB), run=10044-10147msec 00:29:05.999 00:29:05.999 Disk stats (read/write): 00:29:05.999 nvme0n1: ios=49/8960, merge=0/0, ticks=46/1225141, in_queue=1225187, util=95.94% 00:29:05.999 nvme10n1: ios=47/11768, merge=0/0, ticks=1517/1199182, in_queue=1200699, util=100.00% 00:29:05.999 nvme1n1: ios=0/8760, merge=0/0, ticks=0/1226681, in_queue=1226681, util=96.50% 00:29:05.999 nvme2n1: ios=45/10744, merge=0/0, ticks=2486/1227697, in_queue=1230183, util=100.00% 00:29:05.999 nvme3n1: ios=0/9584, merge=0/0, ticks=0/1226140, in_queue=1226140, util=96.89% 00:29:05.999 nvme4n1: ios=41/10579, merge=0/0, ticks=2269/1213065, in_queue=1215334, util=100.00% 00:29:05.999 nvme5n1: ios=0/11996, merge=0/0, ticks=0/1226564, in_queue=1226564, util=97.69% 00:29:05.999 nvme6n1: ios=43/11177, merge=0/0, ticks=1496/1222344, in_queue=1223840, util=100.00% 00:29:05.999 nvme7n1: ios=0/14234, merge=0/0, 
ticks=0/1224533, in_queue=1224533, util=98.49% 00:29:05.999 nvme8n1: ios=38/8816, merge=0/0, ticks=585/1216690, in_queue=1217275, util=100.00% 00:29:05.999 nvme9n1: ios=0/8984, merge=0/0, ticks=0/1226836, in_queue=1226836, util=99.04% 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:05.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:05.999 13:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:29:06.568 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.568 13:56:59 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:06.568 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:29:06.828 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:06.828 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:29:07.088 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:07.088 13:56:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:29:07.348 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:29:07.348 13:57:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:07.348 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:29:07.608 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:29:07.608 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:07.608 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:29:07.869 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:07.869 13:57:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:29:07.869 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:07.869 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:29:07.869 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.869 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:29:07.870 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:07.870 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:29:08.131 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:08.131 13:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:29:08.391 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:29:08.391 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:29:08.391 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@47 -- # nvmftestfini 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.651 rmmod nvme_tcp 00:29:08.651 rmmod nvme_fabrics 00:29:08.651 rmmod nvme_keyring 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1510180 ']' 00:29:08.651 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1510180 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 1510180 ']' 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 1510180 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1510180 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1510180' 00:29:08.652 killing process with pid 1510180 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 1510180 00:29:08.652 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 1510180 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:09.222 13:57:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.129 13:57:03 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:11.129 00:29:11.129 real 1m15.444s 00:29:11.129 user 4m36.650s 00:29:11.129 sys 0m26.335s 00:29:11.129 13:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:11.129 13:57:03 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:29:11.129 ************************************ 00:29:11.129 END TEST nvmf_multiconnection 00:29:11.129 ************************************ 00:29:11.129 13:57:04 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:29:11.129 13:57:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:11.129 13:57:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:11.129 13:57:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.389 ************************************ 00:29:11.389 START TEST nvmf_initiator_timeout 00:29:11.389 ************************************ 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:29:11.389 * Looking for test storage... 00:29:11.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
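The sourcing above also fixes the initiator's identity for the whole suite: nvme gen-hostnqn produces NVME_HOSTNQN, the bare UUID becomes NVME_HOSTID, and both are packed into the NVME_HOST array that every later nvme connect reuses (including the connect to cnode1 further below). A minimal sketch of that pattern; the parameter expansion used to derive the UUID is an assumption for illustration, not the literal common.sh line:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # once a target listens on 10.0.0.2:4420:
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420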
00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 
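With common.sh sourced, the test body begins: nvmftestinit scans the PCI bus, finds the two E810 ports (0000:af:00.0 and 0000:af:00.1, driver ice), and nvmf_tcp_init splits them across a network namespace so initiator and target traffic crosses a real link. Condensed from the trace that follows, with cvl_0_0 as the target-side port and cvl_0_1 as the initiator side:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # sanity check, initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and target back to initiator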
00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:29:11.389 13:57:04 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.961 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:17.962 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:17.962 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.962 
13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:17.962 Found net devices under 0000:af:00.0: cvl_0_0 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:17.962 Found net devices under 0000:af:00.1: cvl_0_1 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.962 13:57:10 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:29:17.962 00:29:17.962 --- 10.0.0.2 ping statistics --- 00:29:17.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.962 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:29:17.962 00:29:17.962 --- 10.0.0.1 ping statistics --- 00:29:17.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.962 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:17.962 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1525136 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1525136 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 1525136 ']' 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:17.963 13:57:10 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:18.222 [2024-06-11 13:57:10.871361] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:29:18.222 [2024-06-11 13:57:10.871421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.222 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.222 [2024-06-11 13:57:10.968526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.222 [2024-06-11 13:57:11.056296] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:18.222 [2024-06-11 13:57:11.056341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.222 [2024-06-11 13:57:11.056354] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.222 [2024-06-11 13:57:11.056366] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.222 [2024-06-11 13:57:11.056376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.222 [2024-06-11 13:57:11.056442] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.222 [2024-06-11 13:57:11.056461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.222 [2024-06-11 13:57:11.056580] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.222 [2024-06-11 13:57:11.056582] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 Malloc0 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 Delay0 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 [2024-06-11 13:57:11.874488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.159 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:19.160 [2024-06-11 13:57:11.902789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.160 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.160 13:57:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:20.535 13:57:13 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:29:20.535 13:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:29:20.535 13:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:29:20.535 13:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:29:20.535 13:57:13 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1525847 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:29:22.436 13:57:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:29:22.436 [global] 00:29:22.436 thread=1 00:29:22.436 invalidate=1 00:29:22.436 rw=write 00:29:22.436 time_based=1 00:29:22.436 runtime=60 00:29:22.436 
ioengine=libaio 00:29:22.436 direct=1 00:29:22.436 bs=4096 00:29:22.436 iodepth=1 00:29:22.436 norandommap=0 00:29:22.436 numjobs=1 00:29:22.436 00:29:22.436 verify_dump=1 00:29:22.436 verify_backlog=512 00:29:22.436 verify_state_save=0 00:29:22.436 do_verify=1 00:29:22.436 verify=crc32c-intel 00:29:22.436 [job0] 00:29:22.436 filename=/dev/nvme0n1 00:29:22.436 Could not set queue depth (nvme0n1) 00:29:22.695 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:22.695 fio-3.35 00:29:22.695 Starting 1 thread 00:29:26.011 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 true 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 true 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 true 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:26.012 true 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.012 13:57:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:28.546 true 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:28.546 true 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.546 
13:57:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:28.546 true 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:28.546 true 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:29:28.546 13:57:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1525847 00:30:24.784 00:30:24.784 job0: (groupid=0, jobs=1): err= 0: pid=1526094: Tue Jun 11 13:58:15 2024 00:30:24.784 read: IOPS=95, BW=383KiB/s (392kB/s)(22.5MiB/60039msec) 00:30:24.784 slat (usec): min=8, max=13316, avg=15.20, stdev=248.01 00:30:24.784 clat (usec): min=289, max=41759k, avg=10119.23, stdev=550645.44 00:30:24.784 lat (usec): min=298, max=41759k, avg=10134.43, stdev=550645.64 00:30:24.784 clat percentiles (usec): 00:30:24.784 | 1.00th=[ 355], 5.00th=[ 379], 10.00th=[ 392], 00:30:24.784 | 20.00th=[ 420], 30.00th=[ 437], 40.00th=[ 449], 00:30:24.784 | 50.00th=[ 457], 60.00th=[ 461], 70.00th=[ 469], 00:30:24.784 | 80.00th=[ 482], 90.00th=[ 515], 95.00th=[ 41157], 00:30:24.784 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 42206], 00:30:24.784 | 99.95th=[ 42206], 99.99th=[17112761] 00:30:24.784 write: IOPS=102, BW=409KiB/s (419kB/s)(24.0MiB/60039msec); 0 zone resets 00:30:24.784 slat (usec): min=11, max=30859, avg=18.18, stdev=393.53 00:30:24.784 clat (usec): min=175, max=2915, avg=260.44, stdev=40.37 00:30:24.784 lat (usec): min=189, max=31222, avg=278.62, stdev=396.91 00:30:24.784 clat percentiles (usec): 00:30:24.784 | 1.00th=[ 196], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 247], 00:30:24.784 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 260], 60.00th=[ 265], 00:30:24.784 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:30:24.784 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 375], 99.95th=[ 412], 00:30:24.784 | 99.99th=[ 2900] 00:30:24.784 bw ( KiB/s): min= 1848, max= 7352, per=100.00%, avg=4468.36, stdev=1952.40, samples=11 00:30:24.784 iops : min= 462, max= 1838, avg=1117.09, stdev=488.10, samples=11 00:30:24.784 lat (usec) : 250=13.46%, 500=79.78%, 750=3.85%, 1000=0.01% 00:30:24.784 lat (msec) : 4=0.02%, 50=2.87%, >=2000=0.01% 00:30:24.784 cpu : usr=0.17%, sys=0.29%, ctx=11901, majf=0, minf=2 00:30:24.784 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:24.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.784 issued rwts: total=5752,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.784 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:24.784 00:30:24.784 Run status group 0 (all jobs): 00:30:24.784 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=22.5MiB 
(23.6MB), run=60039-60039msec 00:30:24.784 WRITE: bw=409KiB/s (419kB/s), 409KiB/s-409KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60039-60039msec 00:30:24.784 00:30:24.784 Disk stats (read/write): 00:30:24.784 nvme0n1: ios=5800/6144, merge=0/0, ticks=17518/1520, in_queue=19038, util=99.81% 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:24.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:30:24.784 nvmf hotplug test: fio successful as expected 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.784 13:58:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.784 rmmod nvme_tcp 00:30:24.784 rmmod nvme_fabrics 00:30:24.784 rmmod nvme_keyring 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1525136 ']' 00:30:24.784 13:58:16 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1525136 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 1525136 ']' 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 1525136 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1525136 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1525136' 00:30:24.784 killing process with pid 1525136 00:30:24.784 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 1525136 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 1525136 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.785 13:58:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.722 13:58:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:25.722 00:30:25.723 real 1m14.313s 00:30:25.723 user 4m30.324s 00:30:25.723 sys 0m8.999s 00:30:25.723 13:58:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:25.723 13:58:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:25.723 ************************************ 00:30:25.723 END TEST nvmf_initiator_timeout 00:30:25.723 ************************************ 00:30:25.723 13:58:18 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:30:25.723 13:58:18 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:30:25.723 13:58:18 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:30:25.723 13:58:18 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:30:25.723 13:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:32.293 
13:58:25 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:32.293 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:32.293 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@366 
-- # (( 0 > 0 )) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:32.293 Found net devices under 0000:af:00.0: cvl_0_0 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.293 13:58:25 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:32.294 Found net devices under 0000:af:00.1: cvl_0_1 00:30:32.294 13:58:25 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.294 13:58:25 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:32.294 13:58:25 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.294 13:58:25 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:30:32.294 13:58:25 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:32.294 13:58:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:32.294 13:58:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:32.294 13:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:32.294 ************************************ 00:30:32.294 START TEST nvmf_perf_adq 00:30:32.294 ************************************ 00:30:32.294 13:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:30:32.554 * Looking for test storage... 
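[The nvmf_initiator_timeout run above reduces to a short RPC sequence: a malloc bdev wrapped in a delay bdev, exported over TCP, connected from the kernel initiator, then stressed by inflating the artificial latency while fio verifies data. A minimal hand-runnable sketch, assuming a running nvmf_tgt and SPDK's scripts/rpc.py; the commands and values below mirror the trace, only the rpc variable is illustrative:

rpc=scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB bdev, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us baseline delay
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# With fio running, the test pushes the delay to 31 s (31000000 us) and later
# restores 30 us; the low average read rate in the results (383 KiB/s at
# bs=4096, i.e. ~95 IOPS over the 60 s run) reflects the delayed window.
$rpc bdev_delay_update_latency Delay0 avg_read 31000000
$rpc bdev_delay_update_latency Delay0 avg_read 30]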
00:30:32.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.554 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:32.555 13:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:39.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:39.125 Found 0000:af:00.1 (0x8086 - 0x159b) 
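[The device-discovery trace above maps each supported PCI function to its kernel net device through sysfs. A rough hand-runnable equivalent — the 8086:159b (E810) device ID and the echoed message come from the log, while the lspci invocation is my own illustration of the script's pci_bus_cache lookup:

# For each Intel 0x159b function, list the net devices the kernel created
# under it, the same /sys/bus/pci walk nvmf/common.sh performs.
for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue
        echo "Found net devices under $pci: ${dev##*/}"
    done
done]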
00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:39.125 Found net devices under 0000:af:00.0: cvl_0_0 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:39.125 Found net devices under 0000:af:00.1: cvl_0_1 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:39.125 13:58:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:30:39.126 13:58:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:30:41.027 13:58:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:30:42.933 13:58:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:30:48.277 13:58:40 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.277 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:48.278 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:48.278 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:48.278 Found net devices under 0000:af:00.0: cvl_0_0 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:48.278 Found net devices under 0000:af:00.1: cvl_0_1 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.278 13:58:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:48.278 00:30:48.278 --- 10.0.0.2 ping statistics --- 00:30:48.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.278 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:30:48.278 00:30:48.278 --- 10.0.0.1 ping statistics --- 00:30:48.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.278 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1544332 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1544332 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1544332 ']' 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:48.278 13:58:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:48.278 [2024-06-11 13:58:41.032900] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
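[nvmftestinit above splits one physical link across a network namespace so a single host can act as both target (cvl_0_0, 10.0.0.2) and initiator (cvl_0_1, 10.0.0.1). Condensed to its ip/iptables steps, all taken from the trace (the initial address flushes are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The target itself then has to start inside the namespace, which is why the trace prefixes NVMF_APP with NVMF_TARGET_NS_CMD and launches nvmf_tgt via ip netns exec cvl_0_0_ns_spdk.]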
00:30:48.278 [2024-06-11 13:58:41.032959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.278 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.278 [2024-06-11 13:58:41.140592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:48.537 [2024-06-11 13:58:41.228107] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.537 [2024-06-11 13:58:41.228147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.537 [2024-06-11 13:58:41.228160] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.537 [2024-06-11 13:58:41.228172] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.537 [2024-06-11 13:58:41.228182] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.537 [2024-06-11 13:58:41.228233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.537 [2024-06-11 13:58:41.228347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.537 [2024-06-11 13:58:41.228457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.537 [2024-06-11 13:58:41.228457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.104 13:58:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.104 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.363 [2024-06-11 13:58:42.147559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.363 Malloc1 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:49.363 [2024-06-11 13:58:42.199414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1544617 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:30:49.363 13:58:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:49.363 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:30:51.892 "tick_rate": 2500000000, 
00:30:51.892 "poll_groups": [ 00:30:51.892 { 00:30:51.892 "name": "nvmf_tgt_poll_group_000", 00:30:51.892 "admin_qpairs": 1, 00:30:51.892 "io_qpairs": 1, 00:30:51.892 "current_admin_qpairs": 1, 00:30:51.892 "current_io_qpairs": 1, 00:30:51.892 "pending_bdev_io": 0, 00:30:51.892 "completed_nvme_io": 16508, 00:30:51.892 "transports": [ 00:30:51.892 { 00:30:51.892 "trtype": "TCP" 00:30:51.892 } 00:30:51.892 ] 00:30:51.892 }, 00:30:51.892 { 00:30:51.892 "name": "nvmf_tgt_poll_group_001", 00:30:51.892 "admin_qpairs": 0, 00:30:51.892 "io_qpairs": 1, 00:30:51.892 "current_admin_qpairs": 0, 00:30:51.892 "current_io_qpairs": 1, 00:30:51.892 "pending_bdev_io": 0, 00:30:51.892 "completed_nvme_io": 20024, 00:30:51.892 "transports": [ 00:30:51.892 { 00:30:51.892 "trtype": "TCP" 00:30:51.892 } 00:30:51.892 ] 00:30:51.892 }, 00:30:51.892 { 00:30:51.892 "name": "nvmf_tgt_poll_group_002", 00:30:51.892 "admin_qpairs": 0, 00:30:51.892 "io_qpairs": 1, 00:30:51.892 "current_admin_qpairs": 0, 00:30:51.892 "current_io_qpairs": 1, 00:30:51.892 "pending_bdev_io": 0, 00:30:51.892 "completed_nvme_io": 16941, 00:30:51.892 "transports": [ 00:30:51.892 { 00:30:51.892 "trtype": "TCP" 00:30:51.892 } 00:30:51.892 ] 00:30:51.892 }, 00:30:51.892 { 00:30:51.892 "name": "nvmf_tgt_poll_group_003", 00:30:51.892 "admin_qpairs": 0, 00:30:51.892 "io_qpairs": 1, 00:30:51.892 "current_admin_qpairs": 0, 00:30:51.892 "current_io_qpairs": 1, 00:30:51.892 "pending_bdev_io": 0, 00:30:51.892 "completed_nvme_io": 16376, 00:30:51.892 "transports": [ 00:30:51.892 { 00:30:51.892 "trtype": "TCP" 00:30:51.892 } 00:30:51.892 ] 00:30:51.892 } 00:30:51.892 ] 00:30:51.892 }' 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:30:51.892 13:58:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1544617 00:31:00.000 Initializing NVMe Controllers 00:31:00.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:00.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:00.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:00.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:00.000 Initialization complete. Launching workers. 
00:31:00.000 ======================================================== 00:31:00.000 Latency(us) 00:31:00.000 Device Information : IOPS MiB/s Average min max 00:31:00.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8938.70 34.92 7159.94 2251.36 11872.57 00:31:00.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10614.80 41.46 6028.44 1855.13 10413.59 00:31:00.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8641.80 33.76 7406.80 2248.52 12040.01 00:31:00.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8728.00 34.09 7333.42 2466.43 12669.57 00:31:00.000 ======================================================== 00:31:00.000 Total : 36923.30 144.23 6933.44 1855.13 12669.57 00:31:00.000 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:00.000 rmmod nvme_tcp 00:31:00.000 rmmod nvme_fabrics 00:31:00.000 rmmod nvme_keyring 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1544332 ']' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1544332 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1544332 ']' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1544332 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1544332 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1544332' 00:31:00.000 killing process with pid 1544332 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1544332 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1544332 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.000 13:58:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.901 13:58:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:01.901 13:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:31:01.901 13:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:31:03.279 13:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:31:05.814 13:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:11.087 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.088 13:59:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:11.088 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:11.088 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
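For readers following the discovery logic above: nvmf/common.sh first buckets supported NICs into the e810/x722/mlx arrays by PCI vendor:device ID (0x8086:0x159b here, matching the two E810 ports), then resolves each PCI address to its kernel interface name through sysfs. A simplified sketch of that resolution step, assuming pci_devs already holds addresses such as 0000:af:00.0 as found in this run; the real script additionally filters on link state, which is omitted here:

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # A bound NIC exposes its interface name under sysfs, e.g.
        # /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep iface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done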
00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:11.088 Found net devices under 0000:af:00.0: cvl_0_0 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:11.088 Found net devices under 0000:af:00.1: cvl_0_1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.088 
13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:11.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:31:11.088 00:31:11.088 --- 10.0.0.2 ping statistics --- 00:31:11.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.088 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:11.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:31:11.088 00:31:11.088 --- 10.0.0.1 ping statistics --- 00:31:11.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.088 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:31:11.088 net.core.busy_poll = 1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:31:11.088 net.core.busy_read = 1 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:31:11.088 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1548446 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1548446 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1548446 ']' 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:11.089 13:59:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.089 [2024-06-11 13:59:03.800822] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:31:11.089 [2024-06-11 13:59:03.800896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.089 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.089 [2024-06-11 13:59:03.908302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:11.089 [2024-06-11 13:59:03.996289] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.089 [2024-06-11 13:59:03.996334] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.089 [2024-06-11 13:59:03.996347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.089 [2024-06-11 13:59:03.996363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.089 [2024-06-11 13:59:03.996373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
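Before the second target comes up, the trace above switches the NIC into ADQ mode. The same steps, collected into one hedged sketch (device name, queue layout, address, and port are taken verbatim from this run; the script executes each command inside the cvl_0_0_ns_spdk namespace via ip netns exec):

    dev=cvl_0_0
    ethtool --offload "$dev" hw-tc-offload on
    ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ)
    tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$dev" ingress
    # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, in hardware only
    tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

A few steps later the transport is created with --sock-priority 1, the idea being that target sockets get tagged for the ADQ traffic class rather than the default one.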
00:31:11.089 [2024-06-11 13:59:03.996437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.347 [2024-06-11 13:59:03.996557] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.347 [2024-06-11 13:59:03.996601] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.347 [2024-06-11 13:59:03.996601] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:11.913 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:12.172 [2024-06-11 13:59:04.902703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:12.172 Malloc1 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.172 13:59:04 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:12.172 [2024-06-11 13:59:04.950292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1548731 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:31:12.172 13:59:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:12.172 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.068 13:59:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:31:14.068 13:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:14.068 13:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:14.325 13:59:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:14.325 13:59:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:31:14.325 "tick_rate": 2500000000, 00:31:14.325 "poll_groups": [ 00:31:14.325 { 00:31:14.325 "name": "nvmf_tgt_poll_group_000", 00:31:14.325 "admin_qpairs": 1, 00:31:14.325 "io_qpairs": 1, 00:31:14.325 "current_admin_qpairs": 1, 00:31:14.325 "current_io_qpairs": 1, 00:31:14.325 "pending_bdev_io": 0, 00:31:14.325 "completed_nvme_io": 21775, 00:31:14.325 "transports": [ 00:31:14.325 { 00:31:14.325 "trtype": "TCP" 00:31:14.325 } 00:31:14.325 ] 00:31:14.325 }, 00:31:14.325 { 00:31:14.325 "name": "nvmf_tgt_poll_group_001", 00:31:14.325 "admin_qpairs": 0, 00:31:14.325 "io_qpairs": 3, 00:31:14.325 "current_admin_qpairs": 0, 00:31:14.325 "current_io_qpairs": 3, 00:31:14.325 "pending_bdev_io": 0, 00:31:14.325 "completed_nvme_io": 31173, 00:31:14.325 "transports": [ 00:31:14.325 { 00:31:14.325 "trtype": "TCP" 00:31:14.325 } 00:31:14.325 ] 00:31:14.325 }, 00:31:14.325 { 00:31:14.325 "name": "nvmf_tgt_poll_group_002", 00:31:14.325 "admin_qpairs": 0, 00:31:14.325 "io_qpairs": 0, 00:31:14.325 "current_admin_qpairs": 0, 00:31:14.325 "current_io_qpairs": 0, 00:31:14.325 "pending_bdev_io": 0, 00:31:14.325 "completed_nvme_io": 0, 
00:31:14.325 "transports": [ 00:31:14.325 { 00:31:14.325 "trtype": "TCP" 00:31:14.325 } 00:31:14.325 ] 00:31:14.325 }, 00:31:14.325 { 00:31:14.325 "name": "nvmf_tgt_poll_group_003", 00:31:14.325 "admin_qpairs": 0, 00:31:14.325 "io_qpairs": 0, 00:31:14.325 "current_admin_qpairs": 0, 00:31:14.325 "current_io_qpairs": 0, 00:31:14.325 "pending_bdev_io": 0, 00:31:14.325 "completed_nvme_io": 0, 00:31:14.325 "transports": [ 00:31:14.325 { 00:31:14.325 "trtype": "TCP" 00:31:14.325 } 00:31:14.325 ] 00:31:14.325 } 00:31:14.325 ] 00:31:14.325 }' 00:31:14.325 13:59:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:31:14.325 13:59:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:31:14.325 13:59:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:31:14.325 13:59:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:31:14.325 13:59:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1548731 00:31:22.459 Initializing NVMe Controllers 00:31:22.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:22.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:22.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:22.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:22.459 Initialization complete. Launching workers. 00:31:22.459 ======================================================== 00:31:22.459 Latency(us) 00:31:22.459 Device Information : IOPS MiB/s Average min max 00:31:22.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11617.40 45.38 5509.45 1912.35 8058.88 00:31:22.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5330.70 20.82 12016.35 1872.13 58726.52 00:31:22.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5348.90 20.89 11993.29 1624.67 57205.07 00:31:22.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5802.10 22.66 11030.17 1515.16 55883.98 00:31:22.459 ======================================================== 00:31:22.459 Total : 28099.09 109.76 9118.09 1515.16 58726.52 00:31:22.459 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.459 rmmod nvme_tcp 00:31:22.459 rmmod nvme_fabrics 00:31:22.459 rmmod nvme_keyring 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1548446 ']' 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1548446 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1548446 ']' 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1548446 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1548446 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1548446' 00:31:22.459 killing process with pid 1548446 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1548446 00:31:22.459 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1548446 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.719 13:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.258 13:59:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.258 13:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:25.258 00:31:25.258 real 0m52.442s 00:31:25.258 user 2m47.380s 00:31:25.258 sys 0m14.288s 00:31:25.258 13:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:25.258 13:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:25.258 ************************************ 00:31:25.258 END TEST nvmf_perf_adq 00:31:25.258 ************************************ 00:31:25.258 13:59:17 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:25.258 13:59:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:25.258 13:59:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:25.258 13:59:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.258 ************************************ 00:31:25.258 START TEST nvmf_shutdown 00:31:25.258 ************************************ 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:31:25.258 * Looking for test storage... 
00:31:25.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:25.258 ************************************ 00:31:25.258 START TEST nvmf_shutdown_tc1 00:31:25.258 ************************************ 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:31:25.258 13:59:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:25.258 13:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:31.831 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:31.831 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:31.831 13:59:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:31.831 Found net devices under 0000:af:00.0: cvl_0_0 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:31.831 Found net devices under 0000:af:00.1: cvl_0_1 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:31.831 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:31.832 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:31:32.091 00:31:32.091 --- 10.0.0.2 ping statistics --- 00:31:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.091 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:31:32.091 00:31:32.091 --- 10.0.0.1 ping statistics --- 00:31:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.091 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1554125 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1554125 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1554125 ']' 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:32.091 13:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:32.091 [2024-06-11 13:59:24.965495] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:31:32.092 [2024-06-11 13:59:24.965560] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.351 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.351 [2024-06-11 13:59:25.064577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.351 [2024-06-11 13:59:25.151819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.351 [2024-06-11 13:59:25.151860] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.351 [2024-06-11 13:59:25.151873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.351 [2024-06-11 13:59:25.151885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.351 [2024-06-11 13:59:25.151895] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.351 [2024-06-11 13:59:25.152000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.352 [2024-06-11 13:59:25.152119] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.352 [2024-06-11 13:59:25.152227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.352 [2024-06-11 13:59:25.152227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.289 [2024-06-11 13:59:25.929943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.289 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:33.290 13:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:31:33.290 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:33.290 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:33.290 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.290 Malloc1 00:31:33.290 [2024-06-11 13:59:26.041859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.290 Malloc2 00:31:33.290 Malloc3 00:31:33.290 Malloc4 00:31:33.290 Malloc5 00:31:33.549 Malloc6 00:31:33.549 Malloc7 00:31:33.549 Malloc8 00:31:33.549 Malloc9 00:31:33.549 Malloc10 00:31:33.549 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:33.549 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:33.549 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:33.549 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1554440 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1554440 /var/tmp/bdevperf.sock 00:31:33.809 13:59:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1554440 ']' 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:33.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme$subsystem", 00:31:33.809 "trtype": "$TEST_TRANSPORT", 00:31:33.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "$NVMF_PORT", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.809 "hdgst": ${hdgst:-false}, 00:31:33.809 "ddgst": ${ddgst:-false} 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 } 00:31:33.809 EOF 00:31:33.809 )") 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme$subsystem", 00:31:33.809 "trtype": "$TEST_TRANSPORT", 00:31:33.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "$NVMF_PORT", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.809 "hdgst": ${hdgst:-false}, 00:31:33.809 "ddgst": ${ddgst:-false} 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 } 00:31:33.809 EOF 00:31:33.809 )") 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme$subsystem", 00:31:33.809 "trtype": 
"$TEST_TRANSPORT", 00:31:33.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "$NVMF_PORT", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.809 "hdgst": ${hdgst:-false}, 00:31:33.809 "ddgst": ${ddgst:-false} 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 } 00:31:33.809 EOF 00:31:33.809 )") 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme$subsystem", 00:31:33.809 "trtype": "$TEST_TRANSPORT", 00:31:33.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.809 "adrfam": "ipv4", 00:31:33.809 "trsvcid": "$NVMF_PORT", 00:31:33.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.809 "hdgst": ${hdgst:-false}, 00:31:33.809 "ddgst": ${ddgst:-false} 00:31:33.809 }, 00:31:33.809 "method": "bdev_nvme_attach_controller" 00:31:33.809 } 00:31:33.809 EOF 00:31:33.809 )") 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.809 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.809 { 00:31:33.809 "params": { 00:31:33.809 "name": "Nvme$subsystem", 00:31:33.810 "trtype": "$TEST_TRANSPORT", 00:31:33.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "$NVMF_PORT", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.810 "hdgst": ${hdgst:-false}, 00:31:33.810 "ddgst": ${ddgst:-false} 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 } 00:31:33.810 EOF 00:31:33.810 )") 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.810 { 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme$subsystem", 00:31:33.810 "trtype": "$TEST_TRANSPORT", 00:31:33.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "$NVMF_PORT", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.810 "hdgst": ${hdgst:-false}, 00:31:33.810 "ddgst": ${ddgst:-false} 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 } 00:31:33.810 EOF 00:31:33.810 )") 00:31:33.810 [2024-06-11 13:59:26.538636] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:31:33.810 [2024-06-11 13:59:26.538687] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.810 { 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme$subsystem", 00:31:33.810 "trtype": "$TEST_TRANSPORT", 00:31:33.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "$NVMF_PORT", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.810 "hdgst": ${hdgst:-false}, 00:31:33.810 "ddgst": ${ddgst:-false} 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 } 00:31:33.810 EOF 00:31:33.810 )") 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.810 { 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme$subsystem", 00:31:33.810 "trtype": "$TEST_TRANSPORT", 00:31:33.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "$NVMF_PORT", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.810 "hdgst": ${hdgst:-false}, 00:31:33.810 "ddgst": ${ddgst:-false} 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 } 00:31:33.810 EOF 00:31:33.810 )") 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.810 { 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme$subsystem", 00:31:33.810 "trtype": "$TEST_TRANSPORT", 00:31:33.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "$NVMF_PORT", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.810 "hdgst": ${hdgst:-false}, 00:31:33.810 "ddgst": ${ddgst:-false} 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 } 00:31:33.810 EOF 00:31:33.810 )") 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:33.810 { 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme$subsystem", 00:31:33.810 "trtype": "$TEST_TRANSPORT", 00:31:33.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "$NVMF_PORT", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.810 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:33.810 "hdgst": ${hdgst:-false}, 00:31:33.810 "ddgst": ${ddgst:-false} 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 } 00:31:33.810 EOF 00:31:33.810 )") 00:31:33.810 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:33.810 13:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme1", 00:31:33.810 "trtype": "tcp", 00:31:33.810 "traddr": "10.0.0.2", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "4420", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.810 "hdgst": false, 00:31:33.810 "ddgst": false 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 },{ 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme2", 00:31:33.810 "trtype": "tcp", 00:31:33.810 "traddr": "10.0.0.2", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "4420", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:33.810 "hdgst": false, 00:31:33.810 "ddgst": false 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 },{ 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme3", 00:31:33.810 "trtype": "tcp", 00:31:33.810 "traddr": "10.0.0.2", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "4420", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:33.810 "hdgst": false, 00:31:33.810 "ddgst": false 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 },{ 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme4", 00:31:33.810 "trtype": "tcp", 00:31:33.810 "traddr": "10.0.0.2", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "4420", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:33.810 "hdgst": false, 00:31:33.810 "ddgst": false 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 },{ 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme5", 00:31:33.810 "trtype": "tcp", 00:31:33.810 "traddr": "10.0.0.2", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "4420", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:33.810 "hdgst": false, 00:31:33.810 "ddgst": false 00:31:33.810 }, 00:31:33.810 "method": "bdev_nvme_attach_controller" 00:31:33.810 },{ 00:31:33.810 "params": { 00:31:33.810 "name": "Nvme6", 00:31:33.810 "trtype": "tcp", 00:31:33.810 "traddr": "10.0.0.2", 00:31:33.810 "adrfam": "ipv4", 00:31:33.810 "trsvcid": "4420", 00:31:33.810 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:33.810 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:33.810 "hdgst": false, 00:31:33.810 "ddgst": false 00:31:33.810 }, 00:31:33.811 "method": "bdev_nvme_attach_controller" 00:31:33.811 },{ 00:31:33.811 "params": { 00:31:33.811 "name": "Nvme7", 00:31:33.811 "trtype": "tcp", 00:31:33.811 "traddr": "10.0.0.2", 00:31:33.811 "adrfam": "ipv4", 00:31:33.811 "trsvcid": "4420", 00:31:33.811 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:33.811 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:31:33.811 "hdgst": false, 00:31:33.811 "ddgst": false 00:31:33.811 }, 00:31:33.811 "method": "bdev_nvme_attach_controller" 00:31:33.811 },{ 00:31:33.811 "params": { 00:31:33.811 "name": "Nvme8", 00:31:33.811 "trtype": "tcp", 00:31:33.811 "traddr": "10.0.0.2", 00:31:33.811 "adrfam": "ipv4", 00:31:33.811 "trsvcid": "4420", 00:31:33.811 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:33.811 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:33.811 "hdgst": false, 00:31:33.811 "ddgst": false 00:31:33.811 }, 00:31:33.811 "method": "bdev_nvme_attach_controller" 00:31:33.811 },{ 00:31:33.811 "params": { 00:31:33.811 "name": "Nvme9", 00:31:33.811 "trtype": "tcp", 00:31:33.811 "traddr": "10.0.0.2", 00:31:33.811 "adrfam": "ipv4", 00:31:33.811 "trsvcid": "4420", 00:31:33.811 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:33.811 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:33.811 "hdgst": false, 00:31:33.811 "ddgst": false 00:31:33.811 }, 00:31:33.811 "method": "bdev_nvme_attach_controller" 00:31:33.811 },{ 00:31:33.811 "params": { 00:31:33.811 "name": "Nvme10", 00:31:33.811 "trtype": "tcp", 00:31:33.811 "traddr": "10.0.0.2", 00:31:33.811 "adrfam": "ipv4", 00:31:33.811 "trsvcid": "4420", 00:31:33.811 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:33.811 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:33.811 "hdgst": false, 00:31:33.811 "ddgst": false 00:31:33.811 }, 00:31:33.811 "method": "bdev_nvme_attach_controller" 00:31:33.811 }' 00:31:33.811 [2024-06-11 13:59:26.633802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.811 [2024-06-11 13:59:26.713862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1554440 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:31:35.717 13:59:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:31:36.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1554440 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1554125 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:31:36.286 13:59:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.286 { 00:31:36.286 "params": { 00:31:36.286 "name": "Nvme$subsystem", 00:31:36.286 "trtype": "$TEST_TRANSPORT", 00:31:36.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.286 "adrfam": "ipv4", 00:31:36.286 "trsvcid": "$NVMF_PORT", 00:31:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.286 "hdgst": ${hdgst:-false}, 00:31:36.286 "ddgst": ${ddgst:-false} 00:31:36.286 }, 00:31:36.286 "method": "bdev_nvme_attach_controller" 00:31:36.286 } 00:31:36.286 EOF 00:31:36.286 )") 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.286 { 00:31:36.286 "params": { 00:31:36.286 "name": "Nvme$subsystem", 00:31:36.286 "trtype": "$TEST_TRANSPORT", 00:31:36.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.286 "adrfam": "ipv4", 00:31:36.286 "trsvcid": "$NVMF_PORT", 00:31:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.286 "hdgst": ${hdgst:-false}, 00:31:36.286 "ddgst": ${ddgst:-false} 00:31:36.286 }, 00:31:36.286 "method": "bdev_nvme_attach_controller" 00:31:36.286 } 00:31:36.286 EOF 00:31:36.286 )") 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.286 { 00:31:36.286 "params": { 00:31:36.286 "name": "Nvme$subsystem", 00:31:36.286 "trtype": "$TEST_TRANSPORT", 00:31:36.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.286 "adrfam": "ipv4", 00:31:36.286 "trsvcid": "$NVMF_PORT", 00:31:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.286 "hdgst": ${hdgst:-false}, 00:31:36.286 "ddgst": ${ddgst:-false} 00:31:36.286 }, 00:31:36.286 "method": "bdev_nvme_attach_controller" 00:31:36.286 } 00:31:36.286 EOF 00:31:36.286 )") 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.286 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.286 { 00:31:36.286 "params": { 00:31:36.286 "name": "Nvme$subsystem", 00:31:36.286 "trtype": "$TEST_TRANSPORT", 00:31:36.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.286 "adrfam": "ipv4", 00:31:36.286 "trsvcid": "$NVMF_PORT", 00:31:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.286 "hdgst": ${hdgst:-false}, 00:31:36.286 "ddgst": ${ddgst:-false} 00:31:36.286 }, 00:31:36.286 "method": "bdev_nvme_attach_controller" 00:31:36.286 } 00:31:36.286 EOF 00:31:36.286 )") 00:31:36.286 13:59:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.546 { 00:31:36.546 "params": { 00:31:36.546 "name": "Nvme$subsystem", 00:31:36.546 "trtype": "$TEST_TRANSPORT", 00:31:36.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.546 "adrfam": "ipv4", 00:31:36.546 "trsvcid": "$NVMF_PORT", 00:31:36.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.546 "hdgst": ${hdgst:-false}, 00:31:36.546 "ddgst": ${ddgst:-false} 00:31:36.546 }, 00:31:36.546 "method": "bdev_nvme_attach_controller" 00:31:36.546 } 00:31:36.546 EOF 00:31:36.546 )") 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.546 { 00:31:36.546 "params": { 00:31:36.546 "name": "Nvme$subsystem", 00:31:36.546 "trtype": "$TEST_TRANSPORT", 00:31:36.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.546 "adrfam": "ipv4", 00:31:36.546 "trsvcid": "$NVMF_PORT", 00:31:36.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.546 "hdgst": ${hdgst:-false}, 00:31:36.546 "ddgst": ${ddgst:-false} 00:31:36.546 }, 00:31:36.546 "method": "bdev_nvme_attach_controller" 00:31:36.546 } 00:31:36.546 EOF 00:31:36.546 )") 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.546 [2024-06-11 13:59:29.212663] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
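The payload being rebuilt here is consumed by bdevperf itself, launched above with -q 64 -o 65536 -w verify -t 1: the same ten NVMe/TCP controllers, queue depth 64, 64 KiB I/Os, a verify workload, one second of measured runtime. That makes the results table further below easy to sanity-check, because the MiB/s column is simply IOPS times the I/O size, i.e. IOPS/16 at 64 KiB. An illustrative one-liner against two rows of that table:

awk 'BEGIN { printf "%.2f %.2f\n", 220.28/16, 2168.61/16 }'   # MiB/s = IOPS*65536/2^20 = IOPS/16
# -> 13.77 135.54, matching the Nvme1n1 and Total rows in the results below

The absolute throughput is incidental here; nvmf_shutdown_tc1 is exercising the target start/stop path, not storage performance.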
00:31:36.546 [2024-06-11 13:59:29.212730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554962 ] 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.546 { 00:31:36.546 "params": { 00:31:36.546 "name": "Nvme$subsystem", 00:31:36.546 "trtype": "$TEST_TRANSPORT", 00:31:36.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.546 "adrfam": "ipv4", 00:31:36.546 "trsvcid": "$NVMF_PORT", 00:31:36.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.546 "hdgst": ${hdgst:-false}, 00:31:36.546 "ddgst": ${ddgst:-false} 00:31:36.546 }, 00:31:36.546 "method": "bdev_nvme_attach_controller" 00:31:36.546 } 00:31:36.546 EOF 00:31:36.546 )") 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.546 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.546 { 00:31:36.546 "params": { 00:31:36.546 "name": "Nvme$subsystem", 00:31:36.546 "trtype": "$TEST_TRANSPORT", 00:31:36.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.546 "adrfam": "ipv4", 00:31:36.546 "trsvcid": "$NVMF_PORT", 00:31:36.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.546 "hdgst": ${hdgst:-false}, 00:31:36.546 "ddgst": ${ddgst:-false} 00:31:36.546 }, 00:31:36.546 "method": "bdev_nvme_attach_controller" 00:31:36.546 } 00:31:36.546 EOF 00:31:36.546 )") 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.547 { 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme$subsystem", 00:31:36.547 "trtype": "$TEST_TRANSPORT", 00:31:36.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "$NVMF_PORT", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.547 "hdgst": ${hdgst:-false}, 00:31:36.547 "ddgst": ${ddgst:-false} 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 } 00:31:36.547 EOF 00:31:36.547 )") 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.547 { 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme$subsystem", 00:31:36.547 "trtype": "$TEST_TRANSPORT", 00:31:36.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "$NVMF_PORT", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.547 "hdgst": ${hdgst:-false}, 
00:31:36.547 "ddgst": ${ddgst:-false} 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 } 00:31:36.547 EOF 00:31:36.547 )") 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:31:36.547 13:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme1", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme2", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme3", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme4", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme5", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme6", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme7", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 
00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme8", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme9", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 },{ 00:31:36.547 "params": { 00:31:36.547 "name": "Nvme10", 00:31:36.547 "trtype": "tcp", 00:31:36.547 "traddr": "10.0.0.2", 00:31:36.547 "adrfam": "ipv4", 00:31:36.547 "trsvcid": "4420", 00:31:36.547 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:36.547 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:36.547 "hdgst": false, 00:31:36.547 "ddgst": false 00:31:36.547 }, 00:31:36.547 "method": "bdev_nvme_attach_controller" 00:31:36.547 }' 00:31:36.547 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.547 [2024-06-11 13:59:29.317029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.547 [2024-06-11 13:59:29.399071] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.927 Running I/O for 1 seconds... 00:31:39.306 00:31:39.306 Latency(us) 00:31:39.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.306 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.306 Verification LBA range: start 0x0 length 0x400 00:31:39.306 Nvme1n1 : 1.16 220.28 13.77 0.00 0.00 287416.93 20552.09 271790.90 00:31:39.306 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.306 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme2n1 : 1.19 215.00 13.44 0.00 0.00 289309.90 25690.11 271790.90 00:31:39.307 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.307 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme3n1 : 1.15 225.30 14.08 0.00 0.00 269209.39 12320.77 276824.06 00:31:39.307 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.307 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme4n1 : 1.17 218.98 13.69 0.00 0.00 273247.23 19713.23 271790.90 00:31:39.307 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.307 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme5n1 : 1.19 215.95 13.50 0.00 0.00 272479.64 17406.36 276824.06 00:31:39.307 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.307 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme6n1 : 1.20 212.83 13.30 0.00 0.00 271613.54 23278.39 276824.06 00:31:39.307 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.307 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme7n1 : 1.18 220.78 13.80 0.00 0.00 255593.42 3879.73 273468.62 00:31:39.307 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.307 Verification LBA range: start 0x0 length 0x400 00:31:39.307 Nvme8n1 : 1.19 214.40 13.40 0.00 0.00 259041.69 
39636.17 256691.40
00:31:39.307 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:39.307 Verification LBA range: start 0x0 length 0x400
00:31:39.307 Nvme9n1 : 1.20 213.32 13.33 0.00 0.00 255244.49 22229.81 278501.79
00:31:39.307 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:39.307 Verification LBA range: start 0x0 length 0x400
00:31:39.307 Nvme10n1 : 1.21 211.77 13.24 0.00 0.00 252426.24 23278.39 305345.33
00:31:39.307 ===================================================================================================================
00:31:39.307 Total : 2168.61 135.54 0.00 0.00 268539.07 3879.73 305345.33
00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:39.567 rmmod nvme_tcp 00:31:39.567 rmmod nvme_fabrics 00:31:39.567 rmmod nvme_keyring 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1554125 ']' 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1554125 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 1554125 ']' 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 1554125 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1554125 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 1554125' 00:31:39.567 killing process with pid 1554125 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 1554125 00:31:39.567 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 1554125 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.135 13:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:42.042 00:31:42.042 real 0m17.025s 00:31:42.042 user 0m36.248s 00:31:42.042 sys 0m7.158s 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:42.042 ************************************ 00:31:42.042 END TEST nvmf_shutdown_tc1 00:31:42.042 ************************************ 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:42.042 ************************************ 00:31:42.042 START TEST nvmf_shutdown_tc2 00:31:42.042 ************************************ 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:31:42.042 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.302 13:59:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:42.302 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:42.302 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.302 13:59:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.302 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:42.303 Found net devices under 0000:af:00.0: cvl_0_0 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:42.303 Found net devices under 0000:af:00.1: cvl_0_1 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:42.303 13:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:42.303 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:42.303 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:42.303 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:42.303 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:42.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:42.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:31:42.563 00:31:42.563 --- 10.0.0.2 ping statistics --- 00:31:42.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.563 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:42.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:42.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:31:42.563 00:31:42.563 --- 10.0.0.1 ping statistics --- 00:31:42.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:42.563 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1556005 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # 
waitforlisten 1556005 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1556005 ']' 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:42.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:42.563 13:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.563 [2024-06-11 13:59:35.388721] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:31:42.563 [2024-06-11 13:59:35.388789] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:42.563 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.822 [2024-06-11 13:59:35.488544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:42.822 [2024-06-11 13:59:35.576724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:42.822 [2024-06-11 13:59:35.576766] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:42.822 [2024-06-11 13:59:35.576780] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:42.822 [2024-06-11 13:59:35.576792] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:42.822 [2024-06-11 13:59:35.576803] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
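The nvmf_tcp_init sequence traced above builds a point-to-point rig out of the dual-port E810 card: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2/24), while its peer cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24); an iptables ACCEPT opens TCP/4420 and a one-packet ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A condensed sketch of that plumbing, with the interface names and addresses taken from the trace (error handling and the surrounding nvmf/common.sh scaffolding omitted):

# Namespace plumbing per nvmf_tcp_init; names/addresses mirror the trace above.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                                # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"                   # target-side namespace
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"      # move one NIC port into it
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1         # initiator stays in the root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 "$NVMF_FIRST_TARGET_IP"                       # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"   # target -> initiator

With NET_TYPE=phy the two ports are physically looped, so the sub-millisecond cross-namespace pings above exercise the real NIC datapath rather than the kernel loopback.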
00:31:42.822 [2024-06-11 13:59:35.576911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:42.822 [2024-06-11 13:59:35.577029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:42.822 [2024-06-11 13:59:35.577136] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.822 [2024-06-11 13:59:35.577136] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:31:43.390 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:43.390 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:43.650 [2024-06-11 13:59:36.350828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:43.650 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:43.651 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:43.651 Malloc1 00:31:43.651 [2024-06-11 13:59:36.462676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.651 Malloc2 00:31:43.651 Malloc3 00:31:43.910 Malloc4 00:31:43.910 Malloc5 00:31:43.910 Malloc6 00:31:43.910 Malloc7 00:31:43.910 Malloc8 00:31:43.910 Malloc9 00:31:44.170 Malloc10 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1556305 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1556305 /var/tmp/bdevperf.sock 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1556305 ']' 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:44.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
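bdevperf above is pointed at --json /dev/fd/63, a process substitution fed by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10. The trace that follows shows that generator at work: one heredoc stanza per subsystem, each describing a bdev_nvme_attach_controller call against nqn.2016-06.io.spdk:cnodeN at 10.0.0.2:4420, comma-joined via IFS=, and validated/pretty-printed with jq. A reduced sketch of the same pattern follows; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape and is assumed here, since the trace only shows the config entries being joined:

# Sketch of the gen_nvmf_target_json pattern from nvmf/common.sh: one
# attach-controller stanza per subsystem id in "$@", comma-joined, wrapped
# in a bdev-subsystem config, and validated with jq.
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# Used the way target/shutdown.sh@102 does:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json_sketch 1 2) \
#            -q 64 -o 65536 -w verify -t 10

Once bdevperf is up, shutdown.sh's waitforio loop (target/shutdown.sh@57-69 in the trace below) polls rpc bdev_get_iostat -b Nvme1n1 every 0.25 s until num_read_ops reaches 100, the cue that I/O is really flowing before the target is killed.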
00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.170 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.170 { 00:31:44.170 "params": { 00:31:44.170 "name": "Nvme$subsystem", 00:31:44.170 "trtype": "$TEST_TRANSPORT", 00:31:44.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.170 "adrfam": "ipv4", 00:31:44.170 "trsvcid": "$NVMF_PORT", 00:31:44.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.170 "hdgst": ${hdgst:-false}, 00:31:44.170 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": 
"$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 [2024-06-11 13:59:36.950918] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:31:44.171 [2024-06-11 13:59:36.950981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556305 ] 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:44.171 { 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme$subsystem", 00:31:44.171 "trtype": "$TEST_TRANSPORT", 00:31:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "$NVMF_PORT", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.171 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.171 "hdgst": ${hdgst:-false}, 00:31:44.171 "ddgst": ${ddgst:-false} 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 } 00:31:44.171 EOF 00:31:44.171 )") 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:31:44.171 13:59:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme1", 00:31:44.171 "trtype": "tcp", 00:31:44.171 "traddr": "10.0.0.2", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "4420", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.171 "hdgst": false, 00:31:44.171 "ddgst": false 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 },{ 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme2", 00:31:44.171 "trtype": "tcp", 00:31:44.171 "traddr": "10.0.0.2", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "4420", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:44.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:44.171 "hdgst": false, 00:31:44.171 "ddgst": false 00:31:44.171 }, 00:31:44.171 "method": "bdev_nvme_attach_controller" 00:31:44.171 },{ 00:31:44.171 "params": { 00:31:44.171 "name": "Nvme3", 00:31:44.171 "trtype": "tcp", 00:31:44.171 "traddr": "10.0.0.2", 00:31:44.171 "adrfam": "ipv4", 00:31:44.171 "trsvcid": "4420", 00:31:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme4", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme5", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme6", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme7", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:31:44.172 "hdgst": false, 
00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme8", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme9", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 },{ 00:31:44.172 "params": { 00:31:44.172 "name": "Nvme10", 00:31:44.172 "trtype": "tcp", 00:31:44.172 "traddr": "10.0.0.2", 00:31:44.172 "adrfam": "ipv4", 00:31:44.172 "trsvcid": "4420", 00:31:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:31:44.172 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:31:44.172 "hdgst": false, 00:31:44.172 "ddgst": false 00:31:44.172 }, 00:31:44.172 "method": "bdev_nvme_attach_controller" 00:31:44.172 }' 00:31:44.172 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.172 [2024-06-11 13:59:37.054722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.431 [2024-06-11 13:59:37.136690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.810 Running I/O for 10 seconds... 00:31:45.810 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:45.810 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:31:45.810 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.811 13:59:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:45.811 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.071 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:31:46.071 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:31:46.071 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:46.331 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:46.331 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:46.331 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:46.331 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:46.331 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.331 13:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:46.331 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.331 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:31:46.331 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:31:46.331 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1556305 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1556305 ']' 00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1556305 00:31:46.656 13:59:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1556305
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1556305'
00:31:46.656 killing process with pid 1556305
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1556305
00:31:46.656 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1556305
00:31:46.656 Received shutdown signal, test time was about 1.006041 seconds
00:31:46.656
00:31:46.656 Latency(us)
00:31:46.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:46.656 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme1n1 : 0.97 197.40 12.34 0.00 0.00 319698.81 23173.53 271790.90
00:31:46.656 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme2n1 : 0.97 209.66 13.10 0.00 0.00 291993.87 5924.45 275146.34
00:31:46.656 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme3n1 : 1.00 256.52 16.03 0.00 0.00 235458.97 20971.52 270113.18
00:31:46.656 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme4n1 : 1.01 254.69 15.92 0.00 0.00 232641.54 17091.79 280179.51
00:31:46.656 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme5n1 : 1.00 267.82 16.74 0.00 0.00 214000.28 11324.62 260046.85
00:31:46.656 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme6n1 : 0.98 195.47 12.22 0.00 0.00 288796.67 23907.53 253335.96
00:31:46.656 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme7n1 : 0.97 201.26 12.58 0.00 0.00 271109.59 6973.03 253335.96
00:31:46.656 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme8n1 : 0.99 193.58 12.10 0.00 0.00 277559.71 25899.83 280179.51
00:31:46.656 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme9n1 : 0.99 194.77 12.17 0.00 0.00 269059.96 19503.51 278501.79
00:31:46.656 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:46.656 Verification LBA range: start 0x0 length 0x400
00:31:46.656 Nvme10n1 : 1.00 192.57 12.04 0.00 0.00 265776.06 22544.38 308700.77
=================================================================================================================== 00:31:46.657 Total : 2163.74 135.23 0.00 0.00 262930.88 5924.45 308700.77 00:31:46.916 13:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:31:47.854 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1556005 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:47.855 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:47.855 rmmod nvme_tcp 00:31:47.855 rmmod nvme_fabrics 00:31:47.855 rmmod nvme_keyring 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1556005 ']' 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1556005 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1556005 ']' 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1556005 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1556005 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1556005' 00:31:48.114 killing process with pid 1556005 00:31:48.114 13:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1556005 00:31:48.114 13:59:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1556005 00:31:48.373 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:48.373 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:48.373 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:48.373 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.373 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:48.373 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.374 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.374 13:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:50.912 00:31:50.912 real 0m8.384s 00:31:50.912 user 0m25.224s 00:31:50.912 sys 0m1.787s 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:50.912 ************************************ 00:31:50.912 END TEST nvmf_shutdown_tc2 00:31:50.912 ************************************ 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:50.912 ************************************ 00:31:50.912 START TEST nvmf_shutdown_tc3 00:31:50.912 ************************************ 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:50.912 
13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.912 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:50.913 13:59:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:50.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:50.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:31:50.913 Found net devices under 0000:af:00.0: cvl_0_0
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]]
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:31:50.913 Found net devices under 0000:af:00.1: cvl_0_1
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:50.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:50.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms
00:31:50.913 
00:31:50.913 --- 10.0.0.2 ping statistics ---
00:31:50.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:50.913 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:50.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:50.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:31:50.913 
00:31:50.913 --- 10.0.0.1 ping statistics ---
00:31:50.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:50.913 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1557646
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1557646
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1557646 ']'
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:50.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:50.913 13:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:31:50.913 [2024-06-11 13:59:43.800907] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:31:50.913 [2024-06-11 13:59:43.800968] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:51.173 EAL: No free 2048 kB hugepages reported on node 1
00:31:51.173 [2024-06-11 13:59:43.897527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:51.173 [2024-06-11 13:59:43.984707] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:51.173 [2024-06-11 13:59:43.984749] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:51.173 [2024-06-11 13:59:43.984762] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:51.173 [2024-06-11 13:59:43.984774] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:51.173 [2024-06-11 13:59:43.984784] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
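For reference, the nvmf_tcp_init sequence traced above reduces to the following minimal sketch: move the target port into its own network namespace so initiator-to-target traffic crosses the physical link, then open the NVMe/TCP port and sanity-check both directions. Interface names and addresses are taken from the log (adapt them to your hardware, run as root); this is a hand-rolled approximation, not the SPDK helper itself.

# sketch of the namespace topology nvmf_tcp_init builds
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                               # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target -> initiator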
00:31:51.173 [2024-06-11 13:59:43.984831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:31:51.173 [2024-06-11 13:59:43.984940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:31:51.173 [2024-06-11 13:59:43.985048] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:31:51.173 [2024-06-11 13:59:43.985048] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:52.111 [2024-06-11 13:59:44.702532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
[... the preceding two lines repeat for each of the ten subsystems ...]
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:52.111 13:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:52.112 Malloc1
00:31:52.112 [2024-06-11 13:59:44.814344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:52.112 Malloc2
00:31:52.112 Malloc3
00:31:52.112 Malloc4
00:31:52.112 Malloc5
00:31:52.112 Malloc6
00:31:52.370 Malloc7
00:31:52.370 Malloc8
00:31:52.370 Malloc9
00:31:52.370 Malloc10
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1557918
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1557918 /var/tmp/bdevperf.sock
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1557918 ']'
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:52.370 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:52.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
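The waitforlisten calls traced above block until the freshly forked SPDK process is alive and answering on its UNIX-domain RPC socket. A simplified stand-in for the helper follows; the real one in test/common/autotest_common.sh does more error handling, and the bare socket-existence check here is an assumption standing in for its RPC probe.

# simplified stand-in for waitforlisten: poll until the pid is alive and
# the UNIX-domain RPC socket has appeared
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        [[ -S $rpc_addr ]] && return 0           # socket is there: listening
        sleep 0.1
    done
    return 1                                     # timed out
}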
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:31:52.371 {
00:31:52.371 "params": {
00:31:52.371 "name": "Nvme$subsystem",
00:31:52.371 "trtype": "$TEST_TRANSPORT",
00:31:52.371 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:52.371 "adrfam": "ipv4",
00:31:52.371 "trsvcid": "$NVMF_PORT",
00:31:52.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:52.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:52.371 "hdgst": ${hdgst:-false},
00:31:52.371 "ddgst": ${ddgst:-false}
00:31:52.371 },
00:31:52.371 "method": "bdev_nvme_attach_controller"
00:31:52.371 }
00:31:52.371 EOF
00:31:52.371 )")
00:31:52.371 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[... the for/config+=/cat sequence above repeats for each of the ten subsystems; the bdevperf startup banner below is printed partway through ...]
00:31:52.630 [2024-06-11 13:59:45.310130] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:31:52.630 [2024-06-11 13:59:45.310194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557918 ]
00:31:52.630 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:31:52.630 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:31:52.630 13:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:31:52.630 "params": {
00:31:52.630 "name": "Nvme1",
00:31:52.630 "trtype": "tcp",
00:31:52.630 "traddr": "10.0.0.2",
00:31:52.630 "adrfam": "ipv4",
00:31:52.630 "trsvcid": "4420",
00:31:52.630 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:52.630 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:52.630 "hdgst": false,
00:31:52.630 "ddgst": false
00:31:52.630 },
00:31:52.630 "method": "bdev_nvme_attach_controller"
00:31:52.630 },{
[... identical stanzas for Nvme2 through Nvme9, differing only in the index, elided ...]
00:31:52.630 "params": {
00:31:52.630 "name": "Nvme10",
00:31:52.630 "trtype": "tcp",
00:31:52.630 "traddr": "10.0.0.2",
00:31:52.630 "adrfam": "ipv4",
00:31:52.630 "trsvcid": "4420",
00:31:52.630 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:31:52.630 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:31:52.630 "hdgst": false,
00:31:52.630 "ddgst": false
00:31:52.630 },
00:31:52.630 "method": "bdev_nvme_attach_controller"
00:31:52.630 }'
00:31:52.630 EAL: No free 2048 kB hugepages reported on node 1
00:31:52.630 [2024-06-11 13:59:45.413914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:52.630 [2024-06-11 13:59:45.494267] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:31:54.535 Running I/O for 10 seconds...
00:31:54.535 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:31:54.535 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0
00:31:54.535 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:31:54.535 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.535 13:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:31:54.536 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:31:54.795 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
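The waitforio loop traced above is a readiness gate: it polls bdev_get_iostat until the first bdev reports at least 100 completed reads, so the shutdown path is only exercised once I/O is demonstrably flowing. A standalone approximation, with scripts/rpc.py standing in for the test suite's rpc_cmd wrapper:

# approximation of waitforio: poll until the bdev has served >= 100 reads,
# giving up after 10 attempts spaced 0.25 s apart
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 count i
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        [ "$count" -ge 100 ] && return 0   # enough I/O observed
        sleep 0.25
    done
    return 1                               # never saw enough reads
}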
00:31:55.068 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1557646
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1557646 ']'
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 1557646
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1557646
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1557646'
00:31:55.069 killing process with pid 1557646
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 1557646
00:31:55.069 13:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 1557646
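killprocess, traced above, wraps the kill in two safety checks before reaping: the pid must still be alive, and its command name must not be the sudo wrapper itself. The approximate shape of the helper, simplified from what the xtrace shows (the real one also handles non-Linux hosts):

# approximate shape of killprocess as traced above
killprocess_sketch() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                # must still be running
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1    # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap (works because it is our child)
}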
00:31:55.069 [2024-06-11 13:59:47.882315] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131dc70 is same with the state(5) to be set
[... the same message repeats for dozens of consecutive timestamps for tqpair=0x131dc70 ...]
00:31:55.069 [2024-06-11 13:59:47.883888] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1320670 is same with the state(5) to be set
[... repeated likewise for tqpair=0x1320670 ...]
00:31:55.070 [2024-06-11 13:59:47.885848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e110 is same with the state(5) to be set
[... repeated likewise for tqpair=0x131e110 ...]
00:31:55.071 [2024-06-11 13:59:47.888003] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set
[... repeated likewise for tqpair=0x131e5b0; the run continues ...]
00:31:55.071 [2024-06-11
13:59:47.888268] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888280] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888292] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888304] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888315] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888327] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888339] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888350] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888363] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888375] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888387] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888399] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888411] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888423] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888435] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888446] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888459] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888471] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888488] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888500] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888512] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888524] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same 
with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888536] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888560] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888571] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888583] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888597] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888609] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888621] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888633] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888657] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888668] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888680] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.071 [2024-06-11 13:59:47.888691] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888703] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888715] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888727] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888738] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.888774] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e5b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890087] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890103] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890112] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890130] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890156] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890164] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890173] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890181] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890190] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890209] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890218] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890226] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890235] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890243] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890251] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890260] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890269] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890277] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the 
state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890285] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890294] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890302] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890311] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890320] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890337] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890345] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890353] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890362] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890371] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890388] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890405] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890413] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890422] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890432] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890448] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890457] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890465] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890474] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890486] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890495] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890503] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890520] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890528] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890537] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890545] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890553] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890562] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890570] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890578] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890596] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890604] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890612] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890621] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.890629] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ef10 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891414] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891431] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 
13:59:47.891444] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891458] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891471] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891486] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891499] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891523] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891535] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891547] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891558] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891571] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.072 [2024-06-11 13:59:47.891582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891617] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891629] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891641] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891664] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891699] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same 
with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891711] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891734] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891757] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891769] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891786] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891834] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891858] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891869] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891892] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891939] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891951] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891962] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891974] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.891997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892009] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892043] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892055] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892090] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892103] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892126] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892138] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892149] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.892161] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131f3b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.893957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.893972] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.893981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.893990] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the 
state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.893998] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894016] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894024] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894033] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894041] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894049] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894058] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894068] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894077] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894093] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.073 [2024-06-11 13:59:47.894118] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894136] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894156] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894165] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894173] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894182] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894190] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894215] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894223] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894232] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894241] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894249] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894258] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894271] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894280] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894289] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894297] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894314] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894323] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894331] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894340] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894349] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 13:59:47.894365] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13201b0 is same with the state(5) to be set 00:31:55.074 [2024-06-11 
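Note: the tcp.c:1602 error above is the receive-state setter rejecting a no-op transition: each qpair tracks its PDU receive state in an enum, and a request to set the state it already holds is logged and ignored rather than re-applied, so a qpair stuck during teardown repeats the same line with only the timestamp changing. A minimal sketch of that guard pattern, with simplified stand-in names (the enum values, struct layout, and the reading of state(5) as the terminal error state are assumptions for illustration, not SPDK's exact definitions):

    #include <stdio.h>

    /* Stand-in for the PDU receive-state enum; value 5 mirrors the
     * "state(5)" printed in the log and is assumed here to be the
     * error state. */
    enum pdu_recv_state {
        PDU_RECV_STATE_AWAIT_PDU_READY = 0,
        PDU_RECV_STATE_ERROR = 5,
    };

    struct tqpair {
        enum pdu_recv_state recv_state;
    };

    /* Guard matching the logged message: setting the receive state to
     * the value it already holds is reported and dropped. */
    static void set_recv_state(struct tqpair *tq, enum pdu_recv_state state)
    {
        if (tq->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)state);
            return;
        }
        tq->recv_state = state;
    }

    int main(void)
    {
        struct tqpair tq = { PDU_RECV_STATE_AWAIT_PDU_READY };

        set_recv_state(&tq, PDU_RECV_STATE_ERROR); /* normal transition */
        set_recv_state(&tq, PDU_RECV_STATE_ERROR); /* no-op: emits the error line */
        return 0;
    }

The nvme_qpair.c notices that follow are the other side of the same teardown: outstanding admin ASYNC EVENT REQUEST commands and queued I/O WRITEs complete with status (00/08), which decodes as status code type 0 (generic command status), status code 0x08, "Command Aborted due to SQ Deletion" - the expected completion when a submission queue is deleted while commands are still pending.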
00:31:55.074 [2024-06-11 13:59:47.894492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715a70 is same with the state(5) to be set
00:31:55.074 [2024-06-11 13:59:47.894669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dc30 is same with the state(5) to be set
00:31:55.074 [2024-06-11 13:59:47.894816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.074 [2024-06-11 13:59:47.894908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.074 [2024-06-11 13:59:47.894921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d2610 is same with the state(5) to be set
00:31:55.074 [2024-06-11 13:59:47.894955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.894970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.894983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.894995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbef0 is same with the state(5) to be set
00:31:55.075 [2024-06-11 13:59:47.895094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2c00 is same with the state(5) to be set
00:31:55.075 [2024-06-11 13:59:47.895232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cf420 is same with the state(5) to be set
00:31:55.075 [2024-06-11 13:59:47.895366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189b0d0 is same with the state(5) to be set
00:31:55.075 [2024-06-11 13:59:47.895509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a14c0 is same with the state(5) to be set
00:31:55.075 [2024-06-11 13:59:47.895644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:55.075 [2024-06-11 13:59:47.895737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.895749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1bd0 is same with the state(5) to be set
00:31:55.075 [2024-06-11 13:59:47.896378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.075 [2024-06-11 13:59:47.896657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.075 [2024-06-11 13:59:47.896669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.076 [2024-06-11 13:59:47.896684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.076 [2024-06-11 13:59:47.896696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.076 [2024-06-11 13:59:47.896711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.076 [2024-06-11 13:59:47.896723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.076 [2024-06-11 13:59:47.896739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.076 [2024-06-11 13:59:47.896752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.076 [2024-06-11 13:59:47.896766] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.896984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.896997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.076 [2024-06-11 13:59:47.897782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.076 [2024-06-11 13:59:47.897795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.897977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.897990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:55.077 [2024-06-11 13:59:47.898247] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16c8e30 was disconnected and freed. reset controller. 00:31:55.077 [2024-06-11 13:59:47.898790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.898981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.898994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.077 [2024-06-11 13:59:47.899391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.077 [2024-06-11 13:59:47.899403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.899726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.899738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.913958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.913984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.078 [2024-06-11 13:59:47.914470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.078 [2024-06-11 13:59:47.914500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.914962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.914984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.915009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.915035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.915059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b6ac0 is same with the state(5) to be set 00:31:55.079 [2024-06-11 13:59:47.915156] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17b6ac0 was disconnected and freed. reset controller. 
00:31:55.079 [2024-06-11 13:59:47.915454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.079 [2024-06-11 13:59:47.915499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.915524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.079 [2024-06-11 13:59:47.915547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.915571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.079 [2024-06-11 13:59:47.915599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.915626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.079 [2024-06-11 13:59:47.915651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.915675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fbd0 is same with the state(5) to be set 00:31:55.079 [2024-06-11 13:59:47.915726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715a70 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dc30 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d2610 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bbef0 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2c00 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cf420 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189b0d0 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.915971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a14c0 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.916008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1bd0 (9): Bad file descriptor 00:31:55.079 [2024-06-11 13:59:47.918195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:55.079 [2024-06-11 13:59:47.918301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 [2024-06-11 13:59:47.918770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.079 [2024-06-11 13:59:47.918795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.079 
00:31:55.079-081 [2024-06-11 13:59:47.918818 .. 13:59:47.921914] nvme_qpair.c: [... 52 repeated NOTICE pairs condensed: nvme_io_qpair_print_command READ sqid:1 cid:2-53 nsid:1 lba:24832-31360 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:31:55.081 [2024-06-11 13:59:47.922032] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17b55a0 was disconnected and freed. reset controller.
00:31:55.081 [2024-06-11 13:59:47.924372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:31:55.081 [2024-06-11 13:59:47.927346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:31:55.081 [2024-06-11 13:59:47.927774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.081 [2024-06-11 13:59:47.927821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bbef0 with addr=10.0.0.2, port=4420
00:31:55.081 [2024-06-11 13:59:47.927858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbef0 is same with the state(5) to be set
00:31:55.081 [2024-06-11 13:59:47.927898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188fbd0 (9): Bad file descriptor
00:31:55.081 [2024-06-11 13:59:47.928034] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
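A note for reading the aborted completions above and below: in spdk_nvme_print_completion output the "(00/08)" tuple is the NVMe status pair (SCT/SC), i.e. status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion". Seeing it here is expected: the in-flight reads are failed back while the submission queues are torn down for the controller resets logged above. A minimal decoding sketch (hypothetical helper, not SPDK code; the mapping is limited to the codes that appear in this log):

    # Hypothetical helper (not part of SPDK): decode the "(SCT/SC)" status pair
    # printed by spdk_nvme_print_completion, e.g. "(00/08)" in the records above.
    GENERIC_STATUS = {                  # SCT 0x0: NVMe generic command status
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",  # command aborted because its SQ was deleted
    }

    def decode_status(sct: int, sc: int) -> str:
        """Return a readable name for an NVMe completion status (SCT, SC) pair."""
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
        return "sct=0x%x sc=0x%02x" % (sct, sc)

    print(decode_status(0x00, 0x08))    # -> ABORTED - SQ DELETION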
00:31:55.081 [2024-06-11 13:59:47.929754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:31:55.081 [2024-06-11 13:59:47.930071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.081 [2024-06-11 13:59:47.930099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186dc30 with addr=10.0.0.2, port=4420
00:31:55.081 [2024-06-11 13:59:47.930117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dc30 is same with the state(5) to be set
00:31:55.081 [2024-06-11 13:59:47.930144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bbef0 (9): Bad file descriptor
00:31:55.081-083 [2024-06-11 13:59:47.930247 .. 13:59:47.932963] nvme_qpair.c: [... 64 repeated NOTICE pairs condensed: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:31:55.083-084 [2024-06-11 13:59:47.934551 .. 13:59:47.937250] nvme_qpair.c: [... second identical batch of 64 NOTICE pairs condensed: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448, each ABORTED - SQ DELETION (00/08) ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.084 [2024-06-11 13:59:47.938849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.084 [2024-06-11 13:59:47.938880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.084 [2024-06-11 13:59:47.938899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.084 [2024-06-11 13:59:47.938919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.084 [2024-06-11 13:59:47.938938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.938959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.938978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939259] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.939975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.939994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.085 [2024-06-11 13:59:47.940472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.085 [2024-06-11 13:59:47.940497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:55.086 [2024-06-11 13:59:47.940828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.940977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.940995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 
13:59:47.941198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.941291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.941307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.942771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.942798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.942823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.942843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.942862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.942878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.942897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.942914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.942938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.942958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.942977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.942993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.086 [2024-06-11 13:59:47.943739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.086 [2024-06-11 13:59:47.943761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.943796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.943836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.943870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.943910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.943945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.943980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.943997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.087 [2024-06-11 13:59:47.944906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.087 [2024-06-11 13:59:47.944924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.944941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.944960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.944976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.944998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.945486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.945506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.946907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.946933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.946958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.946979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.946998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947390] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.088 [2024-06-11 13:59:47.947752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.088 [2024-06-11 13:59:47.947770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.088 [2024-06-11 13:59:47.947789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.088 [2024-06-11 13:59:47.947805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.089 [... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:24-63 (lba:19456-24448) ...]
00:31:55.090 [2024-06-11 13:59:47.951008] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:31:55.090 [2024-06-11 13:59:47.951066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.090 [2024-06-11 13:59:47.951086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.091 [... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63 (lba:16512-24448) ...]
00:31:55.091 [2024-06-11 13:59:47.955484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:55.091 [2024-06-11 13:59:47.955516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:31:55.091 [2024-06-11 13:59:47.955536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:31:55.091 [2024-06-11 13:59:47.955554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:31:55.091 [2024-06-11 13:59:47.955929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.091 [2024-06-11 13:59:47.955953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f1bd0 with addr=10.0.0.2, port=4420
00:31:55.091 [2024-06-11 13:59:47.955970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1bd0 is same with the state(5) to be set
00:31:55.091 [2024-06-11 13:59:47.955994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dc30 (9): Bad file descriptor
00:31:55.091 [2024-06-11 13:59:47.956013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:31:55.091 [2024-06-11 13:59:47.956028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:31:55.091 [2024-06-11 13:59:47.956045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:31:55.091 [2024-06-11 13:59:47.956090] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:55.091 [2024-06-11 13:59:47.956121] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:55.091 [2024-06-11 13:59:47.956144] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:55.091 [2024-06-11 13:59:47.956169] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:31:55.091 [2024-06-11 13:59:47.956189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1bd0 (9): Bad file descriptor
00:31:55.091 [2024-06-11 13:59:47.956331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:31:55.091 [2024-06-11 13:59:47.956354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:31:55.091 [2024-06-11 13:59:47.956372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:55.091 [2024-06-11 13:59:47.956717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.091 [2024-06-11 13:59:47.956738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cf420 with addr=10.0.0.2, port=4420
00:31:55.091 [2024-06-11 13:59:47.956753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cf420 is same with the state(5) to be set
00:31:55.091 [2024-06-11 13:59:47.957055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.091 [2024-06-11 13:59:47.957072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189b0d0 with addr=10.0.0.2, port=4420
00:31:55.091 [2024-06-11 13:59:47.957085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189b0d0 is same with the state(5) to be set
00:31:55.091 [2024-06-11 13:59:47.957315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.091 [2024-06-11 13:59:47.957331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a14c0 with addr=10.0.0.2, port=4420
00:31:55.091 [2024-06-11 13:59:47.957344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a14c0 is same with the state(5) to be set
00:31:55.091 [2024-06-11 13:59:47.957623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.091 [2024-06-11 13:59:47.957640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a2c00 with addr=10.0.0.2, port=4420
00:31:55.092 [2024-06-11 13:59:47.957658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a2c00 is same with the state(5) to be set
00:31:55.092 [2024-06-11 13:59:47.957672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:31:55.092 [2024-06-11 13:59:47.957684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:31:55.092 [2024-06-11 13:59:47.957697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:31:55.092 [2024-06-11 13:59:47.959166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.092 [2024-06-11 13:59:47.959187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.093 [... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63 (lba:16512-24448) ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.093 [2024-06-11 13:59:47.960858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.093 [2024-06-11 13:59:47.960874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.093 [2024-06-11 13:59:47.960887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.093 [2024-06-11 13:59:47.960901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.093 [2024-06-11 13:59:47.960914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.093 [2024-06-11 13:59:47.960929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.093 [2024-06-11 13:59:47.960941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:55.093 [2024-06-11 13:59:47.960954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b7d70 is same with the state(5) to be set
00:31:55.093 [2024-06-11 13:59:47.962851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:55.353 task offset: 28928 on job bdev=Nvme4n1 fails
00:31:55.353
00:31:55.353 Latency(us)
00:31:55.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:55.353 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.353 Job: Nvme1n1 ended in about 0.98 seconds with error
00:31:55.353 Verification LBA range: start 0x0 length 0x400
00:31:55.353 Nvme1n1 : 0.98 130.28 8.14 65.14 0.00 323710.98 24012.39 303667.61
00:31:55.353 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.353 Job: Nvme2n1 ended in about 0.99 seconds with error
00:31:55.353 Verification LBA range: start 0x0 length 0x400
00:31:55.353 Nvme2n1 : 0.99 129.71 8.11 64.86 0.00 318201.31 46556.77 253335.96
00:31:55.353 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.353 Job: Nvme3n1 ended in about 0.99 seconds with error
00:31:55.353 Verification LBA range: start 0x0 length 0x400
00:31:55.353 Nvme3n1 : 0.99 129.18 8.07 64.59 0.00 312582.14 23802.68 281857.23
00:31:55.353 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.353 Job: Nvme4n1 ended in about 0.97 seconds with error
00:31:55.353 Verification LBA range: start 0x0 length 0x400
00:31:55.353 Nvme4n1 : 0.97 198.78 12.42 66.26 0.00 222906.37 20237.52 280179.51
00:31:55.354 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.354 Job: Nvme5n1 ended in about 0.99 seconds with error
00:31:55.354 Verification LBA range: start 0x0 length 0x400
00:31:55.354 Nvme5n1 : 0.99 128.64 8.04 64.32 0.00 300077.88 23907.53 275146.34
00:31:55.354 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.354 Job: Nvme6n1 ended in about 1.00 seconds with error
00:31:55.354 Verification LBA range: start 0x0 length 0x400
00:31:55.354 Nvme6n1 : 1.00 128.16 8.01 64.08 0.00 294425.40 41943.04 283534.95
00:31:55.354 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.354 Job: Nvme7n1 ended in about 0.97 seconds with error
00:31:55.354 Verification LBA range: start 0x0 length 0x400
00:31:55.354 Nvme7n1 : 0.97 196.99 12.31 65.66 0.00 209466.57 17511.22 249980.52
00:31:55.354 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.354 Job: Nvme8n1 ended in about 0.97 seconds with error
00:31:55.354 Verification LBA range: start 0x0 length 0x400
00:31:55.354 Nvme8n1 : 0.97 197.58 12.35 65.86 0.00 203609.29 24641.54 276824.06
00:31:55.354 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.354 Job: Nvme9n1 ended in about 1.01 seconds with error
00:31:55.354 Verification LBA range: start 0x0 length 0x400
00:31:55.354 Nvme9n1 : 1.01 126.69 7.92 63.34 0.00 277568.72 17720.93 275146.34
00:31:55.354 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:55.354 Job: Nvme10n1 ended in about 1.00 seconds with error
00:31:55.354 Verification LBA range: start 0x0 length 0x400
00:31:55.354 Nvme10n1 : 1.00 127.65 7.98 63.82 0.00 268087.02 27682.41 312056.22
00:31:55.354 ===================================================================================================================
00:31:55.354 Total : 1493.66 93.35 647.94 0.00 267511.80 17511.22 312056.22
00:31:55.354 [2024-06-11 13:59:47.989237] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:55.354 [2024-06-11 13:59:47.989280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:31:55.354 [2024-06-11 13:59:47.989658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.354 [2024-06-11 13:59:47.989682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d2610 with addr=10.0.0.2, port=4420
00:31:55.354 [2024-06-11 13:59:47.989698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d2610 is same with the state(5) to be set
00:31:55.354 [2024-06-11 13:59:47.990025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:55.354 [2024-06-11 13:59:47.990042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1715a70 with addr=10.0.0.2, port=4420
00:31:55.354 [2024-06-11 13:59:47.990055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715a70 is same with the state(5) to be set
00:31:55.354 [2024-06-11 13:59:47.990077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cf420 (9): Bad file descriptor
00:31:55.354 [2024-06-11 13:59:47.990096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189b0d0 (9): Bad file descriptor
00:31:55.354 [2024-06-11 13:59:47.990111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a14c0 (9): Bad file descriptor
00:31:55.354 [2024-06-11 13:59:47.990126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a2c00 (9): Bad file descriptor
00:31:55.354 [2024-06-11 13:59:47.990141] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:31:55.354 [2024-06-11 13:59:47.990153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:31:55.354 [2024-06-11
13:59:47.990167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:55.354 [2024-06-11 13:59:47.990309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.990634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.354 [2024-06-11 13:59:47.990652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188fbd0 with addr=10.0.0.2, port=4420 00:31:55.354 [2024-06-11 13:59:47.990664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fbd0 is same with the state(5) to be set 00:31:55.354 [2024-06-11 13:59:47.990679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d2610 (9): Bad file descriptor 00:31:55.354 [2024-06-11 13:59:47.990696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715a70 (9): Bad file descriptor 00:31:55.354 [2024-06-11 13:59:47.990710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.990722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.990735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:55.354 [2024-06-11 13:59:47.990754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.990766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.990783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:31:55.354 [2024-06-11 13:59:47.990799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.990811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.990823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:31:55.354 [2024-06-11 13:59:47.990838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.990851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.990864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:31:55.354 [2024-06-11 13:59:47.990908] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.354 [2024-06-11 13:59:47.990926] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.354 [2024-06-11 13:59:47.990942] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.354 [2024-06-11 13:59:47.990960] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
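For readers pulling numbers out of the bdevperf summary above: each device contributes two "Job:" lines, a "Verification LBA range" line, and one data row laid out as runtime(s), IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency in microseconds. A quick sketch for extracting per-device throughput from a saved copy of that output ("bdevperf.log" is an assumed file name, and the Jenkins timestamp prefixes are assumed to be stripped):

# Match only the data rows ("NvmeXn1 : ..."), then print name, IOPS ($4) and MiB/s ($5).
awk '$1 ~ /^Nvme[0-9]+n1$/ && $2 == ":" { printf "%s iops=%s mib_s=%s\n", $1, $4, $5 }' bdevperf.log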
00:31:55.354 [2024-06-11 13:59:47.990977] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.354 [2024-06-11 13:59:47.990993] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.354 [2024-06-11 13:59:47.991377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.991392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.991402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.991413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.991436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188fbd0 (9): Bad file descriptor 00:31:55.354 [2024-06-11 13:59:47.991450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.991461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.991473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:31:55.354 [2024-06-11 13:59:47.991492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.991504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.991516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:31:55.354 [2024-06-11 13:59:47.991563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:31:55.354 [2024-06-11 13:59:47.991578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:31:55.354 [2024-06-11 13:59:47.991592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:31:55.354 [2024-06-11 13:59:47.991606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.991616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.354 [2024-06-11 13:59:47.991647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.991659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.991675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:31:55.354 [2024-06-11 13:59:47.991716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:55.354 [2024-06-11 13:59:47.992043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.354 [2024-06-11 13:59:47.992061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bbef0 with addr=10.0.0.2, port=4420 00:31:55.354 [2024-06-11 13:59:47.992075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbef0 is same with the state(5) to be set 00:31:55.354 [2024-06-11 13:59:47.992305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.354 [2024-06-11 13:59:47.992321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186dc30 with addr=10.0.0.2, port=4420 00:31:55.354 [2024-06-11 13:59:47.992333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dc30 is same with the state(5) to be set 00:31:55.354 [2024-06-11 13:59:47.992606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:55.354 [2024-06-11 13:59:47.992622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f1bd0 with addr=10.0.0.2, port=4420 00:31:55.354 [2024-06-11 13:59:47.992635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1bd0 is same with the state(5) to be set 00:31:55.354 [2024-06-11 13:59:47.992670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bbef0 (9): Bad file descriptor 00:31:55.354 [2024-06-11 13:59:47.992686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dc30 (9): Bad file descriptor 00:31:55.354 [2024-06-11 13:59:47.992702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f1bd0 (9): Bad file descriptor 00:31:55.354 [2024-06-11 13:59:47.992739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:31:55.354 [2024-06-11 13:59:47.992751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:31:55.354 [2024-06-11 13:59:47.992764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:31:55.355 [2024-06-11 13:59:47.992778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:31:55.355 [2024-06-11 13:59:47.992789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:31:55.355 [2024-06-11 13:59:47.992801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:31:55.355 [2024-06-11 13:59:47.992814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:31:55.355 [2024-06-11 13:59:47.992826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:31:55.355 [2024-06-11 13:59:47.992838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:31:55.355 [2024-06-11 13:59:47.992870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.355 [2024-06-11 13:59:47.992882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
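The records above all follow one cascade: the target side of this test is already gone (the later "kill -9 1557918 ... No such process" confirms its pid no longer exists), so every reconnect from bdevperf's bdev_nvme layer hits connect() errno 111 (ECONNREFUSED), spdk_nvme_ctrlr_reconnect_poll_async gives up, and each controller lands in failed state. A minimal sketch of provoking the same pattern by hand, reusing the RPC invocations visible elsewhere in this trace (workspace-relative paths assumed):

# Bring up a target with one TCP subsystem, then hard-kill it while a client
# (e.g. bdevperf, as in this run) still holds the controller: the client will
# log "connect() failed, errno = 111" on every reconnect attempt.
./build/bin/nvmf_tgt -m 0x3 &
tgtpid=$!
# (wait for the target's RPC socket before issuing these)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
kill -9 "$tgtpid"   # same hard kill shutdown.sh issues; nothing listens on port 4420 anymore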
00:31:55.355 [2024-06-11 13:59:47.992892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.615 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:31:55.615 13:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:31:56.552 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1557918 00:31:56.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1557918) - No such process 00:31:56.552 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:31:56.552 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:31:56.552 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.553 rmmod nvme_tcp 00:31:56.553 rmmod nvme_fabrics 00:31:56.553 rmmod nvme_keyring 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.553 13:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.128 13:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:31:59.128 00:31:59.128 real 0m8.087s 00:31:59.128 user 0m20.039s 00:31:59.128 sys 0m1.611s 00:31:59.128 13:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:59.128 13:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:31:59.128 ************************************ 00:31:59.128 END TEST nvmf_shutdown_tc3 00:31:59.128 ************************************ 00:31:59.128 13:59:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:31:59.128 00:31:59.128 real 0m33.901s 00:31:59.128 user 1m21.670s 00:31:59.128 sys 0m10.837s 00:31:59.128 13:59:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:59.128 13:59:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:59.128 ************************************ 00:31:59.128 END TEST nvmf_shutdown 00:31:59.128 ************************************ 00:31:59.128 13:59:51 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.128 13:59:51 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.128 13:59:51 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:31:59.128 13:59:51 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:59.128 13:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.128 ************************************ 00:31:59.128 START TEST nvmf_multicontroller 00:31:59.128 ************************************ 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:31:59.128 * Looking for test storage... 
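The asterisk banners and the real/user/sys blocks above come from the harness's run_test wrapper, which times each test command between START TEST and END TEST markers. Roughly, as a simplified approximation of the autotest_common.sh helper (not a verbatim copy):

run_test() {
	local test_name=$1
	shift
	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"
	time "$@"   # produces the real/user/sys summary seen above
	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
}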
00:31:59.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.128 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:31:59.129 13:59:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:31:59.129 13:59:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.702 13:59:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:05.702 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:05.702 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:05.702 Found net devices under 0000:af:00.0: cvl_0_0 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:05.702 Found net devices under 0000:af:00.1: cvl_0_1 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.702 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.966 13:59:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:05.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:32:05.966 00:32:05.966 --- 10.0.0.2 ping statistics --- 00:32:05.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.966 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:05.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:32:05.966 00:32:05.966 --- 10.0.0.1 ping statistics --- 00:32:05.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.966 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1562295 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1562295 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1562295 ']' 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:05.966 13:59:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:06.228 [2024-06-11 13:59:58.911013] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:32:06.228 [2024-06-11 13:59:58.911075] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.228 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.228 [2024-06-11 13:59:59.010391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:06.228 [2024-06-11 13:59:59.091293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.228 [2024-06-11 13:59:59.091343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.228 [2024-06-11 13:59:59.091357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.228 [2024-06-11 13:59:59.091374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.228 [2024-06-11 13:59:59.091384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.228 [2024-06-11 13:59:59.091505] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.228 [2024-06-11 13:59:59.091633] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.228 [2024-06-11 13:59:59.091635] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.168 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 [2024-06-11 13:59:59.832618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 Malloc0 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 [2024-06-11 13:59:59.898630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 [2024-06-11 13:59:59.906539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 Malloc1 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1562576 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1562576 /var/tmp/bdevperf.sock 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1562576 ']' 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:07.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
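What follows is the core of the multicontroller test: NVMe0 is attached once through bdevperf's RPC socket, then every re-attach that conflicts with the existing controller (different hostnqn, different subnqn, or a multipath mode that was not set on the first attach) must come back as JSON-RPC error -114. The same checks can be replayed by hand with scripts/rpc.py, which is what rpc_cmd wraps; the flags below are copied from the trace, and the sketch assumes the bdevperf instance is still listening on /var/tmp/bdevperf.sock:

# First attach succeeds and exposes bdev NVMe0n1.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
	-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Each repeat below must fail with code -114 ("A controller named NVMe0 already exists ..."):
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
	-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
	-q nqn.2021-09-7.io.spdk:00001   # conflicting hostnqn
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
	-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000   # different subnqn
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
	-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
	-x failover   # multipath mode not set on the first attach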
00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:07.169 13:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.108 14:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:08.108 14:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:32:08.108 14:00:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:32:08.108 14:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.108 14:00:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.368 NVMe0n1 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.368 1 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.368 request: 00:32:08.368 { 00:32:08.368 "name": "NVMe0", 00:32:08.368 "trtype": "tcp", 00:32:08.368 "traddr": "10.0.0.2", 00:32:08.368 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:32:08.368 "hostaddr": "10.0.0.2", 00:32:08.368 "hostsvcid": "60000", 00:32:08.368 "adrfam": "ipv4", 00:32:08.368 "trsvcid": "4420", 00:32:08.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.368 "method": 
"bdev_nvme_attach_controller", 00:32:08.368 "req_id": 1 00:32:08.368 } 00:32:08.368 Got JSON-RPC error response 00:32:08.368 response: 00:32:08.368 { 00:32:08.368 "code": -114, 00:32:08.368 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:32:08.368 } 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.368 request: 00:32:08.368 { 00:32:08.368 "name": "NVMe0", 00:32:08.368 "trtype": "tcp", 00:32:08.368 "traddr": "10.0.0.2", 00:32:08.368 "hostaddr": "10.0.0.2", 00:32:08.368 "hostsvcid": "60000", 00:32:08.368 "adrfam": "ipv4", 00:32:08.368 "trsvcid": "4420", 00:32:08.368 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:08.368 "method": "bdev_nvme_attach_controller", 00:32:08.368 "req_id": 1 00:32:08.368 } 00:32:08.368 Got JSON-RPC error response 00:32:08.368 response: 00:32:08.368 { 00:32:08.368 "code": -114, 00:32:08.368 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:32:08.368 } 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.368 request: 00:32:08.368 { 00:32:08.368 "name": "NVMe0", 00:32:08.368 "trtype": "tcp", 00:32:08.368 "traddr": "10.0.0.2", 00:32:08.368 "hostaddr": "10.0.0.2", 00:32:08.368 "hostsvcid": "60000", 00:32:08.368 "adrfam": "ipv4", 00:32:08.368 "trsvcid": "4420", 00:32:08.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.368 "multipath": "disable", 00:32:08.368 "method": "bdev_nvme_attach_controller", 00:32:08.368 "req_id": 1 00:32:08.368 } 00:32:08.368 Got JSON-RPC error response 00:32:08.368 response: 00:32:08.368 { 00:32:08.368 "code": -114, 00:32:08.368 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:32:08.368 } 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.368 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.368 request: 00:32:08.368 { 00:32:08.368 "name": "NVMe0", 00:32:08.368 "trtype": "tcp", 00:32:08.368 "traddr": "10.0.0.2", 00:32:08.368 "hostaddr": "10.0.0.2", 00:32:08.369 "hostsvcid": "60000", 00:32:08.369 "adrfam": "ipv4", 00:32:08.369 "trsvcid": "4420", 00:32:08.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.369 "multipath": "failover", 00:32:08.369 "method": "bdev_nvme_attach_controller", 00:32:08.369 "req_id": 1 00:32:08.369 } 00:32:08.369 Got JSON-RPC error response 00:32:08.369 response: 00:32:08.369 { 00:32:08.369 "code": -114, 00:32:08.369 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:32:08.369 } 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.369 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.627 00:32:08.627 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.628 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:32:08.628 14:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:10.013 0 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1562576 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1562576 ']' 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1562576 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1562576 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1562576' 00:32:10.013 killing process with pid 1562576 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1562576 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1562576 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:32:10.013 14:00:02 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:32:10.013 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:32:10.014 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:10.014 [2024-06-11 14:00:00.014279] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:32:10.014 [2024-06-11 14:00:00.014351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562576 ] 00:32:10.014 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.014 [2024-06-11 14:00:00.112377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.014 [2024-06-11 14:00:00.198923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.014 [2024-06-11 14:00:01.413939] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name fb01b2d3-6f4b-4761-b970-4a64af901552 already exists 00:32:10.014 [2024-06-11 14:00:01.413976] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:fb01b2d3-6f4b-4761-b970-4a64af901552 alias for bdev NVMe1n1 00:32:10.014 [2024-06-11 14:00:01.413992] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:32:10.014 Running I/O for 1 seconds... 
00:32:10.014 00:32:10.014 Latency(us) 00:32:10.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.014 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:32:10.014 NVMe0n1 : 1.00 19106.58 74.64 0.00 0.00 6688.66 4168.09 13841.20 00:32:10.014 =================================================================================================================== 00:32:10.014 Total : 19106.58 74.64 0.00 0.00 6688.66 4168.09 13841.20 00:32:10.014 Received shutdown signal, test time was about 1.000000 seconds 00:32:10.014 00:32:10.014 Latency(us) 00:32:10.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.014 =================================================================================================================== 00:32:10.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.014 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:10.014 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:10.014 rmmod nvme_tcp 00:32:10.014 rmmod nvme_fabrics 00:32:10.014 rmmod nvme_keyring 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1562295 ']' 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1562295 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1562295 ']' 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1562295 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1562295 00:32:10.273 14:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:10.273 14:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:10.273 14:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1562295' 00:32:10.273 killing process with pid 1562295 00:32:10.273 14:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1562295 00:32:10.273 14:00:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1562295 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:10.531 14:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.520 14:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:12.520 00:32:12.520 real 0m13.637s 00:32:12.520 user 0m17.402s 00:32:12.520 sys 0m6.446s 00:32:12.520 14:00:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:12.520 14:00:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:32:12.520 ************************************ 00:32:12.520 END TEST nvmf_multicontroller 00:32:12.520 ************************************ 00:32:12.520 14:00:05 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:12.520 14:00:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:12.520 14:00:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:12.520 14:00:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:12.520 ************************************ 00:32:12.520 START TEST nvmf_aer 00:32:12.521 ************************************ 00:32:12.521 14:00:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:32:12.780 * Looking for test storage... 
00:32:12.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:32:12.780 14:00:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:19.351 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:32:19.351 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:19.351 Found net devices under 0000:af:00.0: cvl_0_0 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:19.351 Found net devices under 0000:af:00.1: cvl_0_1 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.351 
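Note on the setup that follows: nvmf/common.sh has just picked the two E810 ports (0000:af:00.0 as cvl_0_0, 0000:af:00.1 as cvl_0_1) and nvmf_tcp_init now wires them into SPDK's usual single-host TCP topology, isolating the target port in a private network namespace. Condensed from the trace below into a standalone sketch (interface names and addresses are the ones this log uses; this is an illustrative reconstruction of what the trace does, not the verbatim nvmf_tcp_init source):

  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target sanity check

The two pings recorded below (0.176 ms and 0.151 ms RTT) confirm the path in both directions before nvmf_tgt is started inside the namespace via ip netns exec cvl_0_0_ns_spdk.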
14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.351 14:00:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:19.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:32:19.352 00:32:19.352 --- 10.0.0.2 ping statistics --- 00:32:19.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.352 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:19.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:32:19.352 00:32:19.352 --- 10.0.0.1 ping statistics --- 00:32:19.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.352 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1567115 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1567115 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 1567115 ']' 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:19.352 14:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:19.611 [2024-06-11 14:00:12.264033] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:32:19.611 [2024-06-11 14:00:12.264095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.611 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.611 [2024-06-11 14:00:12.375032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:19.611 [2024-06-11 14:00:12.458036] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.611 [2024-06-11 14:00:12.458087] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:19.611 [2024-06-11 14:00:12.458101] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.611 [2024-06-11 14:00:12.458113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.611 [2024-06-11 14:00:12.458122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.611 [2024-06-11 14:00:12.458182] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.611 [2024-06-11 14:00:12.458275] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:19.611 [2024-06-11 14:00:12.458370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.611 [2024-06-11 14:00:12.458370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 [2024-06-11 14:00:13.230064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 Malloc0 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.550 [2024-06-11 14:00:13.285807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.550 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.551 [ 00:32:20.551 { 00:32:20.551 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:20.551 "subtype": "Discovery", 00:32:20.551 "listen_addresses": [], 00:32:20.551 "allow_any_host": true, 00:32:20.551 "hosts": [] 00:32:20.551 }, 00:32:20.551 { 00:32:20.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.551 "subtype": "NVMe", 00:32:20.551 "listen_addresses": [ 00:32:20.551 { 00:32:20.551 "trtype": "TCP", 00:32:20.551 "adrfam": "IPv4", 00:32:20.551 "traddr": "10.0.0.2", 00:32:20.551 "trsvcid": "4420" 00:32:20.551 } 00:32:20.551 ], 00:32:20.551 "allow_any_host": true, 00:32:20.551 "hosts": [], 00:32:20.551 "serial_number": "SPDK00000000000001", 00:32:20.551 "model_number": "SPDK bdev Controller", 00:32:20.551 "max_namespaces": 2, 00:32:20.551 "min_cntlid": 1, 00:32:20.551 "max_cntlid": 65519, 00:32:20.551 "namespaces": [ 00:32:20.551 { 00:32:20.551 "nsid": 1, 00:32:20.551 "bdev_name": "Malloc0", 00:32:20.551 "name": "Malloc0", 00:32:20.551 "nguid": "42F91333B78A49BCB58B8F2FBB46679D", 00:32:20.551 "uuid": "42f91333-b78a-49bc-b58b-8f2fbb46679d" 00:32:20.551 } 00:32:20.551 ] 00:32:20.551 } 00:32:20.551 ] 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1567399 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:32:20.551 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:32:20.551 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:32:20.810 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:32:20.810 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.811 Malloc1 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:20.811 Asynchronous Event Request test 00:32:20.811 Attaching to 10.0.0.2 00:32:20.811 Attached to 10.0.0.2 00:32:20.811 Registering asynchronous event callbacks... 00:32:20.811 Starting namespace attribute notice tests for all controllers... 00:32:20.811 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:32:20.811 aer_cb - Changed Namespace 00:32:20.811 Cleaning up... 
00:32:20.811 [ 00:32:20.811 { 00:32:20.811 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:20.811 "subtype": "Discovery", 00:32:20.811 "listen_addresses": [], 00:32:20.811 "allow_any_host": true, 00:32:20.811 "hosts": [] 00:32:20.811 }, 00:32:20.811 { 00:32:20.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.811 "subtype": "NVMe", 00:32:20.811 "listen_addresses": [ 00:32:20.811 { 00:32:20.811 "trtype": "TCP", 00:32:20.811 "adrfam": "IPv4", 00:32:20.811 "traddr": "10.0.0.2", 00:32:20.811 "trsvcid": "4420" 00:32:20.811 } 00:32:20.811 ], 00:32:20.811 "allow_any_host": true, 00:32:20.811 "hosts": [], 00:32:20.811 "serial_number": "SPDK00000000000001", 00:32:20.811 "model_number": "SPDK bdev Controller", 00:32:20.811 "max_namespaces": 2, 00:32:20.811 "min_cntlid": 1, 00:32:20.811 "max_cntlid": 65519, 00:32:20.811 "namespaces": [ 00:32:20.811 { 00:32:20.811 "nsid": 1, 00:32:20.811 "bdev_name": "Malloc0", 00:32:20.811 "name": "Malloc0", 00:32:20.811 "nguid": "42F91333B78A49BCB58B8F2FBB46679D", 00:32:20.811 "uuid": "42f91333-b78a-49bc-b58b-8f2fbb46679d" 00:32:20.811 }, 00:32:20.811 { 00:32:20.811 "nsid": 2, 00:32:20.811 "bdev_name": "Malloc1", 00:32:20.811 "name": "Malloc1", 00:32:20.811 "nguid": "253BA12229C84E66821B2FF4AC59C33E", 00:32:20.811 "uuid": "253ba122-29c8-4e66-821b-2ff4ac59c33e" 00:32:20.811 } 00:32:20.811 ] 00:32:20.811 } 00:32:20.811 ] 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1567399 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.811 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.070 rmmod nvme_tcp 00:32:21.070 rmmod nvme_fabrics 00:32:21.070 rmmod nvme_keyring 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1567115 ']' 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1567115 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 1567115 ']' 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 1567115 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1567115 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1567115' 00:32:21.070 killing process with pid 1567115 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 1567115 00:32:21.070 14:00:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 1567115 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.330 14:00:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.865 14:00:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.865 00:32:23.865 real 0m10.748s 00:32:23.865 user 0m8.204s 00:32:23.865 sys 0m5.688s 00:32:23.865 14:00:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:23.865 14:00:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:32:23.865 ************************************ 00:32:23.865 END TEST nvmf_aer 00:32:23.865 ************************************ 00:32:23.865 14:00:16 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:23.865 14:00:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:23.865 14:00:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:23.865 14:00:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.865 ************************************ 00:32:23.865 START TEST nvmf_async_init 00:32:23.865 ************************************ 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:32:23.865 * Looking for test storage... 
00:32:23.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.865 14:00:16 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4a0a60599bba4f04a87f7957aa2c5113 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.866 14:00:16 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.866 14:00:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:30.439 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:30.439 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:30.439 Found net devices under 0000:af:00.0: cvl_0_0 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
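The pci_net_devs loop above resolves each matched E810 function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A standalone sketch of the same discovery (device IDs 0x1592/0x159b taken from the table above; lspci is used here only as an illustration, it is not what common.sh itself calls):

    # List Intel E810 functions and the netdevs behind them (sketch).
    for id in 8086:1592 8086:159b; do
        for pci in $(lspci -Dmmn -d "$id" | awk '{print $1}'); do
            echo "Found $pci ($id)"
            ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null   # e.g. cvl_0_0
        done
    done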
00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:30.439 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:30.440 Found net devices under 0000:af:00.1: cvl_0_1 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.440 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:30.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:32:30.699 00:32:30.699 --- 10.0.0.2 ping statistics --- 00:32:30.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.699 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:32:30.699 00:32:30.699 --- 10.0.0.1 ping statistics --- 00:32:30.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.699 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1571096 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1571096 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 1571096 ']' 00:32:30.699 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.700 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:30.700 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.700 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:30.700 14:00:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:30.958 [2024-06-11 14:00:23.618751] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
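At this point nvmf_tcp_init has finished wiring the back-to-back topology that every tcp/phy test in this run reuses: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with ping. Collected from the trace above into one runnable sketch (interface and namespace names exactly as logged on this host):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator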
00:32:30.958 [2024-06-11 14:00:23.618811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.958 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.958 [2024-06-11 14:00:23.726243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.958 [2024-06-11 14:00:23.811499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.958 [2024-06-11 14:00:23.811544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.958 [2024-06-11 14:00:23.811557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.958 [2024-06-11 14:00:23.811569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.959 [2024-06-11 14:00:23.811579] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.959 [2024-06-11 14:00:23.811616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.893 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:31.893 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:32:31.893 14:00:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:31.893 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 [2024-06-11 14:00:24.568046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 null0 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- 
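The rpc_cmd sequence that begins above and continues below builds the target side of the test: a TCP transport, a null bdev, and subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420 with the null bdev as namespace 1. As explicit rpc.py calls it is roughly the following (a sketch: rpc_cmd in these tests wraps scripts/rpc.py, and the nguid is the dash-stripped uuidgen output printed earlier):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o                  # $NVMF_TRANSPORT_OPTS on tcp
    $RPC bdev_null_create null0 1024 512                  # 1024 MiB, 512-byte blocks -> 2097152 blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4a0a60599bba4f04a87f7957aa2c5113
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The bdev_get_bdevs dump that follows confirms the result: nvme0n1 with 2097152 blocks of 512 bytes, a uuid equal to the nguid, and cntlid 1; after the bdev_nvme_reset_controller further down, the same bdev reappears with cntlid 2 while the identifiers stay stable, which is the property async_init is checking.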
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4a0a60599bba4f04a87f7957aa2c5113 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:31.894 [2024-06-11 14:00:24.612313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.894 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.153 nvme0n1 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.153 [ 00:32:32.153 { 00:32:32.153 "name": "nvme0n1", 00:32:32.153 "aliases": [ 00:32:32.153 "4a0a6059-9bba-4f04-a87f-7957aa2c5113" 00:32:32.153 ], 00:32:32.153 "product_name": "NVMe disk", 00:32:32.153 "block_size": 512, 00:32:32.153 "num_blocks": 2097152, 00:32:32.153 "uuid": "4a0a6059-9bba-4f04-a87f-7957aa2c5113", 00:32:32.153 "assigned_rate_limits": { 00:32:32.153 "rw_ios_per_sec": 0, 00:32:32.153 "rw_mbytes_per_sec": 0, 00:32:32.153 "r_mbytes_per_sec": 0, 00:32:32.153 "w_mbytes_per_sec": 0 00:32:32.153 }, 00:32:32.153 "claimed": false, 00:32:32.153 "zoned": false, 00:32:32.153 "supported_io_types": { 00:32:32.153 "read": true, 00:32:32.153 "write": true, 00:32:32.153 "unmap": false, 00:32:32.153 "write_zeroes": true, 00:32:32.153 "flush": true, 00:32:32.153 "reset": true, 00:32:32.153 "compare": true, 00:32:32.153 "compare_and_write": true, 00:32:32.153 "abort": true, 00:32:32.153 "nvme_admin": true, 00:32:32.153 "nvme_io": true 00:32:32.153 }, 00:32:32.153 "memory_domains": [ 00:32:32.153 { 00:32:32.153 "dma_device_id": "system", 00:32:32.153 "dma_device_type": 1 00:32:32.153 } 00:32:32.153 ], 00:32:32.153 "driver_specific": { 00:32:32.153 "nvme": [ 00:32:32.153 { 00:32:32.153 "trid": { 00:32:32.153 "trtype": "TCP", 00:32:32.153 "adrfam": "IPv4", 00:32:32.153 "traddr": "10.0.0.2", 00:32:32.153 "trsvcid": "4420", 00:32:32.153 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:32.153 }, 00:32:32.153 "ctrlr_data": { 00:32:32.153 "cntlid": 1, 00:32:32.153 "vendor_id": "0x8086", 00:32:32.153 "model_number": "SPDK bdev Controller", 00:32:32.153 "serial_number": "00000000000000000000", 00:32:32.153 "firmware_revision": 
"24.09", 00:32:32.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.153 "oacs": { 00:32:32.153 "security": 0, 00:32:32.153 "format": 0, 00:32:32.153 "firmware": 0, 00:32:32.153 "ns_manage": 0 00:32:32.153 }, 00:32:32.153 "multi_ctrlr": true, 00:32:32.153 "ana_reporting": false 00:32:32.153 }, 00:32:32.153 "vs": { 00:32:32.153 "nvme_version": "1.3" 00:32:32.153 }, 00:32:32.153 "ns_data": { 00:32:32.153 "id": 1, 00:32:32.153 "can_share": true 00:32:32.153 } 00:32:32.153 } 00:32:32.153 ], 00:32:32.153 "mp_policy": "active_passive" 00:32:32.153 } 00:32:32.153 } 00:32:32.153 ] 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.153 14:00:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.153 [2024-06-11 14:00:24.885451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:32.153 [2024-06-11 14:00:24.885529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1872410 (9): Bad file descriptor 00:32:32.153 [2024-06-11 14:00:25.017592] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.153 [ 00:32:32.153 { 00:32:32.153 "name": "nvme0n1", 00:32:32.153 "aliases": [ 00:32:32.153 "4a0a6059-9bba-4f04-a87f-7957aa2c5113" 00:32:32.153 ], 00:32:32.153 "product_name": "NVMe disk", 00:32:32.153 "block_size": 512, 00:32:32.153 "num_blocks": 2097152, 00:32:32.153 "uuid": "4a0a6059-9bba-4f04-a87f-7957aa2c5113", 00:32:32.153 "assigned_rate_limits": { 00:32:32.153 "rw_ios_per_sec": 0, 00:32:32.153 "rw_mbytes_per_sec": 0, 00:32:32.153 "r_mbytes_per_sec": 0, 00:32:32.153 "w_mbytes_per_sec": 0 00:32:32.153 }, 00:32:32.153 "claimed": false, 00:32:32.153 "zoned": false, 00:32:32.153 "supported_io_types": { 00:32:32.153 "read": true, 00:32:32.153 "write": true, 00:32:32.153 "unmap": false, 00:32:32.153 "write_zeroes": true, 00:32:32.153 "flush": true, 00:32:32.153 "reset": true, 00:32:32.153 "compare": true, 00:32:32.153 "compare_and_write": true, 00:32:32.153 "abort": true, 00:32:32.153 "nvme_admin": true, 00:32:32.153 "nvme_io": true 00:32:32.153 }, 00:32:32.153 "memory_domains": [ 00:32:32.153 { 00:32:32.153 "dma_device_id": "system", 00:32:32.153 "dma_device_type": 1 00:32:32.153 } 00:32:32.153 ], 00:32:32.153 "driver_specific": { 00:32:32.153 "nvme": [ 00:32:32.153 { 00:32:32.153 "trid": { 00:32:32.153 "trtype": "TCP", 00:32:32.153 "adrfam": "IPv4", 00:32:32.153 "traddr": "10.0.0.2", 00:32:32.153 "trsvcid": "4420", 00:32:32.153 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:32.153 }, 00:32:32.153 "ctrlr_data": { 00:32:32.153 "cntlid": 2, 00:32:32.153 "vendor_id": "0x8086", 00:32:32.153 "model_number": "SPDK bdev Controller", 00:32:32.153 "serial_number": "00000000000000000000", 00:32:32.153 "firmware_revision": "24.09", 00:32:32.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.153 
"oacs": { 00:32:32.153 "security": 0, 00:32:32.153 "format": 0, 00:32:32.153 "firmware": 0, 00:32:32.153 "ns_manage": 0 00:32:32.153 }, 00:32:32.153 "multi_ctrlr": true, 00:32:32.153 "ana_reporting": false 00:32:32.153 }, 00:32:32.153 "vs": { 00:32:32.153 "nvme_version": "1.3" 00:32:32.153 }, 00:32:32.153 "ns_data": { 00:32:32.153 "id": 1, 00:32:32.153 "can_share": true 00:32:32.153 } 00:32:32.153 } 00:32:32.153 ], 00:32:32.153 "mp_policy": "active_passive" 00:32:32.153 } 00:32:32.153 } 00:32:32.153 ] 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.153 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DjRe3noayI 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DjRe3noayI 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.412 [2024-06-11 14:00:25.090149] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:32.412 [2024-06-11 14:00:25.090280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:32.412 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DjRe3noayI 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.413 [2024-06-11 14:00:25.098170] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DjRe3noayI 00:32:32.413 14:00:25 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.413 [2024-06-11 14:00:25.110203] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:32.413 [2024-06-11 14:00:25.110252] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:32:32.413 nvme0n1 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:32.413 [ 00:32:32.413 { 00:32:32.413 "name": "nvme0n1", 00:32:32.413 "aliases": [ 00:32:32.413 "4a0a6059-9bba-4f04-a87f-7957aa2c5113" 00:32:32.413 ], 00:32:32.413 "product_name": "NVMe disk", 00:32:32.413 "block_size": 512, 00:32:32.413 "num_blocks": 2097152, 00:32:32.413 "uuid": "4a0a6059-9bba-4f04-a87f-7957aa2c5113", 00:32:32.413 "assigned_rate_limits": { 00:32:32.413 "rw_ios_per_sec": 0, 00:32:32.413 "rw_mbytes_per_sec": 0, 00:32:32.413 "r_mbytes_per_sec": 0, 00:32:32.413 "w_mbytes_per_sec": 0 00:32:32.413 }, 00:32:32.413 "claimed": false, 00:32:32.413 "zoned": false, 00:32:32.413 "supported_io_types": { 00:32:32.413 "read": true, 00:32:32.413 "write": true, 00:32:32.413 "unmap": false, 00:32:32.413 "write_zeroes": true, 00:32:32.413 "flush": true, 00:32:32.413 "reset": true, 00:32:32.413 "compare": true, 00:32:32.413 "compare_and_write": true, 00:32:32.413 "abort": true, 00:32:32.413 "nvme_admin": true, 00:32:32.413 "nvme_io": true 00:32:32.413 }, 00:32:32.413 "memory_domains": [ 00:32:32.413 { 00:32:32.413 "dma_device_id": "system", 00:32:32.413 "dma_device_type": 1 00:32:32.413 } 00:32:32.413 ], 00:32:32.413 "driver_specific": { 00:32:32.413 "nvme": [ 00:32:32.413 { 00:32:32.413 "trid": { 00:32:32.413 "trtype": "TCP", 00:32:32.413 "adrfam": "IPv4", 00:32:32.413 "traddr": "10.0.0.2", 00:32:32.413 "trsvcid": "4421", 00:32:32.413 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:32.413 }, 00:32:32.413 "ctrlr_data": { 00:32:32.413 "cntlid": 3, 00:32:32.413 "vendor_id": "0x8086", 00:32:32.413 "model_number": "SPDK bdev Controller", 00:32:32.413 "serial_number": "00000000000000000000", 00:32:32.413 "firmware_revision": "24.09", 00:32:32.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.413 "oacs": { 00:32:32.413 "security": 0, 00:32:32.413 "format": 0, 00:32:32.413 "firmware": 0, 00:32:32.413 "ns_manage": 0 00:32:32.413 }, 00:32:32.413 "multi_ctrlr": true, 00:32:32.413 "ana_reporting": false 00:32:32.413 }, 00:32:32.413 "vs": { 00:32:32.413 "nvme_version": "1.3" 00:32:32.413 }, 00:32:32.413 "ns_data": { 00:32:32.413 "id": 1, 00:32:32.413 "can_share": true 00:32:32.413 } 00:32:32.413 } 00:32:32.413 ], 00:32:32.413 "mp_policy": "active_passive" 00:32:32.413 } 00:32:32.413 } 00:32:32.413 ] 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.DjRe3noayI 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:32.413 rmmod nvme_tcp 00:32:32.413 rmmod nvme_fabrics 00:32:32.413 rmmod nvme_keyring 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1571096 ']' 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1571096 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 1571096 ']' 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 1571096 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:32.413 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1571096 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1571096' 00:32:32.672 killing process with pid 1571096 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 1571096 00:32:32.672 [2024-06-11 14:00:25.370474] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:32.672 [2024-06-11 14:00:25.370528] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 1571096 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.672 
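nvmftestfini then tears everything down: clear the trap, sync, unload the NVMe host modules (the {1..20} loop above retries because the kernel may briefly hold references), kill the target, and undo the namespace plumbing. A rough standalone equivalent of the traced steps (the netns deletion is an assumption about what _remove_spdk_ns does, and the retry delay is added for the sketch):

    sync
    for i in $(seq 1 20); do
        modprobe -v -r nvme-tcp && break                 # also rmmods nvme_fabrics and nvme_keyring
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                      # 1571096 in this run
    ip netns delete cvl_0_0_ns_spdk                      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1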
14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.672 14:00:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.217 14:00:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:35.217 00:32:35.217 real 0m11.381s 00:32:35.217 user 0m4.104s 00:32:35.217 sys 0m6.015s 00:32:35.217 14:00:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:35.217 14:00:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:32:35.217 ************************************ 00:32:35.217 END TEST nvmf_async_init 00:32:35.217 ************************************ 00:32:35.217 14:00:27 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:35.217 14:00:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:35.217 14:00:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:35.217 14:00:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.217 ************************************ 00:32:35.217 START TEST dma 00:32:35.217 ************************************ 00:32:35.217 14:00:27 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:32:35.217 * Looking for test storage... 00:32:35.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:35.217 14:00:27 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.217 14:00:27 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.217 14:00:27 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.217 14:00:27 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.217 14:00:27 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=[... toolchain PATH elided; identical to the dump printed for nvmf_async_init above ...] 00:32:35.217 14:00:27 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=[... toolchain PATH elided, as above ...] 00:32:35.217 14:00:27 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=[... toolchain PATH elided, as above ...] 00:32:35.217 14:00:27 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:32:35.217 14:00:27 nvmf_tcp.dma -- paths/export.sh@6 -- # echo [... toolchain PATH elided, as above ...] 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:35.217 14:00:27 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:35.217 14:00:27 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:32:35.217 14:00:27 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:32:35.217 00:32:35.217 real 0m0.135s 00:32:35.217 user 0m0.065s 00:32:35.217 sys 0m0.080s
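The dma suite that just ran is effectively RDMA-only: on a tcp transport, host/dma.sh falls through its guard and exits 0 immediately, which is why the whole TEST costs only ~0.1 s of wall time above. Reduced to its TCP behaviour (the variable name is assumed from the harness's --transport handling; the trace shows it already expanded to tcp):

    # host/dma.sh on a TCP run, in essence:
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0          # nothing to test on tcp; the suite passes vacuously
    fi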
14:00:27 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:35.217 14:00:27 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:32:35.217 ************************************ 00:32:35.217 END TEST dma 00:32:35.217 ************************************ 00:32:35.217 14:00:27 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:35.217 14:00:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:35.217 14:00:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:35.217 14:00:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.217 ************************************ 00:32:35.217 START TEST nvmf_identify 00:32:35.217 ************************************ 00:32:35.217 14:00:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:32:35.217 * Looking for test storage... 00:32:35.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.217 14:00:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.217 14:00:28 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=[... toolchain PATH elided; identical to the dump printed for nvmf_async_init above ...] 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=[... toolchain PATH elided, as above ...] 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=[... toolchain PATH elided, as above ...] 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo [... toolchain PATH elided, as above ...] 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:32:35.218 14:00:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:32:41.784 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.785 14:00:34 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:41.785 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:41.785 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:41.785 Found net devices under 0000:af:00.0: cvl_0_0 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:41.785 Found net devices under 0000:af:00.1: cvl_0_1 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.785 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:41.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms
00:32:41.785
00:32:41.785 --- 10.0.0.2 ping statistics ---
00:32:41.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:41.785 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:41.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:41.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:32:41.785
00:32:41.785 --- 10.0.0.1 ping statistics ---
00:32:41.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:41.785 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:32:41.785 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1575114
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1575114
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 1575114 ']'
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:41.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:41.786 14:00:34 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:32:41.786 [2024-06-11 14:00:34.560462] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
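--- [Editor's note: sketch added for clarity; not part of the captured log] ---
The nvmf_tcp_init block above is worth reading as a recipe: the two E810 ports on this rig are cabled back-to-back, one port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, and the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal reproduction of the same topology, assuming this rig's interface names, is:

  # target side: isolate one port in its own netns
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side: the peer port stays in the root namespace
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1
  sudo ip link set cvl_0_1 up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # sanity-check the link before starting the target
  # then launch the target inside the namespace, as identify.sh@18 does above
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The two one-packet pings in the log (0.167 ms and 0.099 ms round-trip) are exactly this sanity check, run in both directions before nvmf_tgt comes up.
--- [end editor's note] ---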
00:32:41.786 [2024-06-11 14:00:34.560530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:41.786 EAL: No free 2048 kB hugepages reported on node 1
00:32:41.786 [2024-06-11 14:00:34.667636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:42.052 [2024-06-11 14:00:34.756603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:42.053 [2024-06-11 14:00:34.756645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:42.053 [2024-06-11 14:00:34.756659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:42.053 [2024-06-11 14:00:34.756670] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:42.053 [2024-06-11 14:00:34.756680] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:42.053 [2024-06-11 14:00:34.756735] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:32:42.053 [2024-06-11 14:00:34.756830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:32:42.053 [2024-06-11 14:00:34.756940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 [2024-06-11 14:00:34.756940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.623 [2024-06-11 14:00:35.474522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.623 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.886 Malloc0
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.886 [2024-06-11 14:00:35.574400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:32:42.886 [
00:32:42.886   {
00:32:42.886     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:32:42.886     "subtype": "Discovery",
00:32:42.886     "listen_addresses": [
00:32:42.886       {
00:32:42.886         "trtype": "TCP",
00:32:42.886         "adrfam": "IPv4",
00:32:42.886         "traddr": "10.0.0.2",
00:32:42.886         "trsvcid": "4420"
00:32:42.886       }
00:32:42.886     ],
00:32:42.886     "allow_any_host": true,
00:32:42.886     "hosts": []
00:32:42.886   },
00:32:42.886   {
00:32:42.886     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:32:42.886     "subtype": "NVMe",
00:32:42.886     "listen_addresses": [
00:32:42.886       {
00:32:42.886         "trtype": "TCP",
00:32:42.886         "adrfam": "IPv4",
00:32:42.886         "traddr": "10.0.0.2",
00:32:42.886         "trsvcid": "4420"
00:32:42.886       }
00:32:42.886     ],
00:32:42.886     "allow_any_host": true,
00:32:42.886     "hosts": [],
00:32:42.886     "serial_number": "SPDK00000000000001",
00:32:42.886     "model_number": "SPDK bdev Controller",
00:32:42.886     "max_namespaces": 32,
00:32:42.886     "min_cntlid": 1,
00:32:42.886     "max_cntlid": 65519,
00:32:42.886     "namespaces": [
00:32:42.886       {
00:32:42.886         "nsid": 1,
00:32:42.886         "bdev_name": "Malloc0",
00:32:42.886         "name": "Malloc0",
00:32:42.886         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:32:42.886         "eui64": "ABCDEF0123456789",
00:32:42.886         "uuid": "46fd6574-7e73-40e8-97c3-a08a22e7e33c"
00:32:42.886       }
00:32:42.886     ]
00:32:42.886   }
00:32:42.886 ]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:32:42.886 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:32:42.886 [2024-06-11 14:00:35.632095] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
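--- [Editor's note: sketch added for clarity; not part of the captured log] ---
The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py, which talks to the target over the /var/tmp/spdk.sock UNIX socket mentioned earlier. The whole target configuration for this test is just seven RPCs; flags below are copied verbatim from the run, and a sketch of issuing them by hand from the SPDK repo root would look like:

  RPC="./scripts/rpc.py"   # defaults to /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192          # -u: in-capsule data size
  $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                              # prints the JSON shown above

The JSON dump confirms the end state: a discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with namespace 1 (Malloc0, carrying the NGUID/EUI64 set above), both listening on 10.0.0.2:4420.
--- [end editor's note] ---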
00:32:42.886 [2024-06-11 14:00:35.632135] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575343 ] 00:32:42.886 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.886 [2024-06-11 14:00:35.665972] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:32:42.886 [2024-06-11 14:00:35.666032] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:32:42.886 [2024-06-11 14:00:35.666041] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:32:42.886 [2024-06-11 14:00:35.666056] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:32:42.886 [2024-06-11 14:00:35.666068] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:32:42.886 [2024-06-11 14:00:35.669536] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:32:42.886 [2024-06-11 14:00:35.669580] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17c5f00 0 00:32:42.886 [2024-06-11 14:00:35.677493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:32:42.886 [2024-06-11 14:00:35.677511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:32:42.886 [2024-06-11 14:00:35.677518] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:32:42.886 [2024-06-11 14:00:35.677525] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:32:42.886 [2024-06-11 14:00:35.677576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.677585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.677592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.886 [2024-06-11 14:00:35.677610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:42.886 [2024-06-11 14:00:35.677631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.886 [2024-06-11 14:00:35.684488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.886 [2024-06-11 14:00:35.684501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.886 [2024-06-11 14:00:35.684508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.684516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.886 [2024-06-11 14:00:35.684531] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:42.886 [2024-06-11 14:00:35.684541] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:32:42.886 [2024-06-11 14:00:35.684550] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:32:42.886 [2024-06-11 14:00:35.684567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.684574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.684581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.886 [2024-06-11 14:00:35.684592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.886 [2024-06-11 14:00:35.684611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.886 [2024-06-11 14:00:35.684819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.886 [2024-06-11 14:00:35.684829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.886 [2024-06-11 14:00:35.684835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.684842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.886 [2024-06-11 14:00:35.684851] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:32:42.886 [2024-06-11 14:00:35.684863] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:32:42.886 [2024-06-11 14:00:35.684875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.684882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.684888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.886 [2024-06-11 14:00:35.684899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.886 [2024-06-11 14:00:35.684915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.886 [2024-06-11 14:00:35.685021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.886 [2024-06-11 14:00:35.685031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.886 [2024-06-11 14:00:35.685040] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.886 [2024-06-11 14:00:35.685047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.886 [2024-06-11 14:00:35.685056] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:32:42.886 [2024-06-11 14:00:35.685069] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:32:42.887 [2024-06-11 14:00:35.685079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.685102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.887 [2024-06-11 14:00:35.685118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.887 [2024-06-11 14:00:35.685219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.887 [2024-06-11 14:00:35.685228] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.887 [2024-06-11 14:00:35.685235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.887 [2024-06-11 14:00:35.685250] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:42.887 [2024-06-11 14:00:35.685264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.685288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.887 [2024-06-11 14:00:35.685303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.887 [2024-06-11 14:00:35.685407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.887 [2024-06-11 14:00:35.685416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.887 [2024-06-11 14:00:35.685422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.887 [2024-06-11 14:00:35.685437] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:32:42.887 [2024-06-11 14:00:35.685446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:32:42.887 [2024-06-11 14:00:35.685458] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:42.887 [2024-06-11 14:00:35.685567] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:32:42.887 [2024-06-11 14:00:35.685577] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:42.887 [2024-06-11 14:00:35.685590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.685613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.887 [2024-06-11 14:00:35.685629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.887 [2024-06-11 14:00:35.685804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.887 [2024-06-11 14:00:35.685814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.887 [2024-06-11 14:00:35.685820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.887 
[2024-06-11 14:00:35.685827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.887 [2024-06-11 14:00:35.685835] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:42.887 [2024-06-11 14:00:35.685850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.685863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.685873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.887 [2024-06-11 14:00:35.685889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.887 [2024-06-11 14:00:35.685994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.887 [2024-06-11 14:00:35.686004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.887 [2024-06-11 14:00:35.686011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.887 [2024-06-11 14:00:35.686025] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:42.887 [2024-06-11 14:00:35.686034] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:32:42.887 [2024-06-11 14:00:35.686048] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:32:42.887 [2024-06-11 14:00:35.686061] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:32:42.887 [2024-06-11 14:00:35.686075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.686094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.887 [2024-06-11 14:00:35.686111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.887 [2024-06-11 14:00:35.686320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:42.887 [2024-06-11 14:00:35.686329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:42.887 [2024-06-11 14:00:35.686335] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686342] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c5f00): datao=0, datal=4096, cccid=0 00:32:42.887 [2024-06-11 14:00:35.686351] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1830e40) on tqpair(0x17c5f00): expected_datao=0, payload_size=4096 00:32:42.887 [2024-06-11 14:00:35.686359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 
[2024-06-11 14:00:35.686405] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.887 [2024-06-11 14:00:35.686499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.887 [2024-06-11 14:00:35.686505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.887 [2024-06-11 14:00:35.686527] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:32:42.887 [2024-06-11 14:00:35.686536] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:32:42.887 [2024-06-11 14:00:35.686544] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:32:42.887 [2024-06-11 14:00:35.686553] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:32:42.887 [2024-06-11 14:00:35.686561] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:32:42.887 [2024-06-11 14:00:35.686570] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:32:42.887 [2024-06-11 14:00:35.686587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:32:42.887 [2024-06-11 14:00:35.686601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.686625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:42.887 [2024-06-11 14:00:35.686642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.887 [2024-06-11 14:00:35.686751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.887 [2024-06-11 14:00:35.686761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.887 [2024-06-11 14:00:35.686767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00 00:32:42.887 [2024-06-11 14:00:35.686786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.686809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.887 [2024-06-11 14:00:35.686819] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.686841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.887 [2024-06-11 14:00:35.686851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.686874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.887 [2024-06-11 14:00:35.686883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.887 [2024-06-11 14:00:35.686897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c5f00) 00:32:42.887 [2024-06-11 14:00:35.686905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.887 [2024-06-11 14:00:35.686916] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:32:42.887 [2024-06-11 14:00:35.686933] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:42.887 [2024-06-11 14:00:35.686945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.686951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c5f00) 00:32:42.888 [2024-06-11 14:00:35.686962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.888 [2024-06-11 14:00:35.686980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830e40, cid 0, qid 0 00:32:42.888 [2024-06-11 14:00:35.686989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1830fc0, cid 1, qid 0 00:32:42.888 [2024-06-11 14:00:35.686997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1831140, cid 2, qid 0 00:32:42.888 [2024-06-11 14:00:35.687004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18312c0, cid 3, qid 0 00:32:42.888 [2024-06-11 14:00:35.687012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1831440, cid 4, qid 0 00:32:42.888 [2024-06-11 14:00:35.687150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.888 [2024-06-11 14:00:35.687160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.888 [2024-06-11 14:00:35.687168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1831440) on tqpair=0x17c5f00 00:32:42.888 [2024-06-11 14:00:35.687184] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:32:42.888 [2024-06-11 14:00:35.687193] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:32:42.888 [2024-06-11 14:00:35.687209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c5f00) 00:32:42.888 [2024-06-11 14:00:35.687226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.888 [2024-06-11 14:00:35.687242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1831440, cid 4, qid 0 00:32:42.888 [2024-06-11 14:00:35.687365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:42.888 [2024-06-11 14:00:35.687375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:42.888 [2024-06-11 14:00:35.687381] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687388] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c5f00): datao=0, datal=4096, cccid=4 00:32:42.888 [2024-06-11 14:00:35.687398] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1831440) on tqpair(0x17c5f00): expected_datao=0, payload_size=4096 00:32:42.888 [2024-06-11 14:00:35.687407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687417] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687424] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.888 [2024-06-11 14:00:35.687492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.888 [2024-06-11 14:00:35.687499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1831440) on tqpair=0x17c5f00 00:32:42.888 [2024-06-11 14:00:35.687523] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:32:42.888 [2024-06-11 14:00:35.687557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c5f00) 00:32:42.888 [2024-06-11 14:00:35.687577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.888 [2024-06-11 14:00:35.687587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17c5f00) 00:32:42.888 [2024-06-11 14:00:35.687610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:42.888 [2024-06-11 14:00:35.687631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1831440, cid 4, qid 0 00:32:42.888 [2024-06-11 14:00:35.687640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18315c0, cid 5, qid 0 00:32:42.888 [2024-06-11 14:00:35.687775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:42.888 [2024-06-11 14:00:35.687785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:42.888 [2024-06-11 14:00:35.687792] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687799] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c5f00): datao=0, datal=1024, cccid=4 00:32:42.888 [2024-06-11 14:00:35.687807] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1831440) on tqpair(0x17c5f00): expected_datao=0, payload_size=1024 00:32:42.888 [2024-06-11 14:00:35.687815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687824] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687831] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.888 [2024-06-11 14:00:35.687848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.888 [2024-06-11 14:00:35.687855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.687862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18315c0) on tqpair=0x17c5f00 00:32:42.888 [2024-06-11 14:00:35.728667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:42.888 [2024-06-11 14:00:35.728683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:42.888 [2024-06-11 14:00:35.728690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.728698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1831440) on tqpair=0x17c5f00 00:32:42.888 [2024-06-11 14:00:35.728722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.728729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c5f00) 00:32:42.888 [2024-06-11 14:00:35.728741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.888 [2024-06-11 14:00:35.728766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1831440, cid 4, qid 0 00:32:42.888 [2024-06-11 14:00:35.728936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:42.888 [2024-06-11 14:00:35.728946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:42.888 [2024-06-11 14:00:35.728953] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.728960] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c5f00): datao=0, datal=3072, cccid=4 00:32:42.888 [2024-06-11 14:00:35.728968] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1831440) on tqpair(0x17c5f00): expected_datao=0, payload_size=3072 00:32:42.888 [2024-06-11 14:00:35.728977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.729019] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:42.888 [2024-06-11 14:00:35.729026] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.729095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:42.888 [2024-06-11 14:00:35.729105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:42.888 [2024-06-11 14:00:35.729112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.729119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1831440) on tqpair=0x17c5f00
00:32:42.888 [2024-06-11 14:00:35.729131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.729139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17c5f00)
00:32:42.888 [2024-06-11 14:00:35.729149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:42.888 [2024-06-11 14:00:35.729171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1831440, cid 4, qid 0
00:32:42.888 [2024-06-11 14:00:35.729324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:32:42.888 [2024-06-11 14:00:35.729334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:32:42.888 [2024-06-11 14:00:35.729341] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.729347] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17c5f00): datao=0, datal=8, cccid=4
00:32:42.888 [2024-06-11 14:00:35.729356] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1831440) on tqpair(0x17c5f00): expected_datao=0, payload_size=8
00:32:42.888 [2024-06-11 14:00:35.729364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.729374] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.729381] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.769740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:42.888 [2024-06-11 14:00:35.769757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:42.888 [2024-06-11 14:00:35.769764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:42.888 [2024-06-11 14:00:35.769772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1831440) on tqpair=0x17c5f00
00:32:42.888 =====================================================
00:32:42.888 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:32:42.888 =====================================================
00:32:42.888 Controller Capabilities/Features
00:32:42.888 ================================
00:32:42.888 Vendor ID: 0000
00:32:42.888 Subsystem Vendor ID: 0000
00:32:42.888 Serial Number: ....................
00:32:42.888 Model Number: ........................................
00:32:42.888 Firmware Version: 24.09
00:32:42.888 Recommended Arb Burst: 0
00:32:42.888 IEEE OUI Identifier: 00 00 00
00:32:42.888 Multi-path I/O
00:32:42.888 May have multiple subsystem ports: No
00:32:42.888 May have multiple controllers: No
00:32:42.888 Associated with SR-IOV VF: No
00:32:42.888 Max Data Transfer Size: 131072
00:32:42.888 Max Number of Namespaces: 0
00:32:42.888 Max Number of I/O Queues: 1024
00:32:42.888 NVMe Specification Version (VS): 1.3
00:32:42.888 NVMe Specification Version (Identify): 1.3
00:32:42.888 Maximum Queue Entries: 128
00:32:42.888 Contiguous Queues Required: Yes
00:32:42.888 Arbitration Mechanisms Supported
00:32:42.888 Weighted Round Robin: Not Supported
00:32:42.888 Vendor Specific: Not Supported
00:32:42.888 Reset Timeout: 15000 ms
00:32:42.888 Doorbell Stride: 4 bytes
00:32:42.888 NVM Subsystem Reset: Not Supported
00:32:42.888 Command Sets Supported
00:32:42.888 NVM Command Set: Supported
00:32:42.888 Boot Partition: Not Supported
00:32:42.889 Memory Page Size Minimum: 4096 bytes
00:32:42.889 Memory Page Size Maximum: 4096 bytes
00:32:42.889 Persistent Memory Region: Not Supported
00:32:42.889 Optional Asynchronous Events Supported
00:32:42.889 Namespace Attribute Notices: Not Supported
00:32:42.889 Firmware Activation Notices: Not Supported
00:32:42.889 ANA Change Notices: Not Supported
00:32:42.889 PLE Aggregate Log Change Notices: Not Supported
00:32:42.889 LBA Status Info Alert Notices: Not Supported
00:32:42.889 EGE Aggregate Log Change Notices: Not Supported
00:32:42.889 Normal NVM Subsystem Shutdown event: Not Supported
00:32:42.889 Zone Descriptor Change Notices: Not Supported
00:32:42.889 Discovery Log Change Notices: Supported
00:32:42.889 Controller Attributes
00:32:42.889 128-bit Host Identifier: Not Supported
00:32:42.889 Non-Operational Permissive Mode: Not Supported
00:32:42.889 NVM Sets: Not Supported
00:32:42.889 Read Recovery Levels: Not Supported
00:32:42.889 Endurance Groups: Not Supported
00:32:42.889 Predictable Latency Mode: Not Supported
00:32:42.889 Traffic Based Keep ALive: Not Supported
00:32:42.889 Namespace Granularity: Not Supported
00:32:42.889 SQ Associations: Not Supported
00:32:42.889 UUID List: Not Supported
00:32:42.889 Multi-Domain Subsystem: Not Supported
00:32:42.889 Fixed Capacity Management: Not Supported
00:32:42.889 Variable Capacity Management: Not Supported
00:32:42.889 Delete Endurance Group: Not Supported
00:32:42.889 Delete NVM Set: Not Supported
00:32:42.889 Extended LBA Formats Supported: Not Supported
00:32:42.889 Flexible Data Placement Supported: Not Supported
00:32:42.889
00:32:42.889 Controller Memory Buffer Support
00:32:42.889 ================================
00:32:42.889 Supported: No
00:32:42.889
00:32:42.889 Persistent Memory Region Support
00:32:42.889 ================================
00:32:42.889 Supported: No
00:32:42.889
00:32:42.889 Admin Command Set Attributes
00:32:42.889 ============================
00:32:42.889 Security Send/Receive: Not Supported
00:32:42.889 Format NVM: Not Supported
00:32:42.889 Firmware Activate/Download: Not Supported
00:32:42.889 Namespace Management: Not Supported
00:32:42.889 Device Self-Test: Not Supported
00:32:42.889 Directives: Not Supported
00:32:42.889 NVMe-MI: Not Supported
00:32:42.889 Virtualization Management: Not Supported
00:32:42.889 Doorbell Buffer Config: Not Supported
00:32:42.889 Get LBA Status Capability: Not Supported
00:32:42.889 Command & Feature Lockdown Capability: Not Supported
00:32:42.889 Abort Command Limit: 1
00:32:42.889 Async Event Request Limit: 4
00:32:42.889 Number of Firmware Slots: N/A
00:32:42.889 Firmware Slot 1 Read-Only: N/A
00:32:42.889 Firmware Activation Without Reset: N/A
00:32:42.889 Multiple Update Detection Support: N/A
00:32:42.889 Firmware Update Granularity: No Information Provided
00:32:42.889 Per-Namespace SMART Log: No
00:32:42.889 Asymmetric Namespace Access Log Page: Not Supported
00:32:42.889 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:32:42.889 Command Effects Log Page: Not Supported
00:32:42.889 Get Log Page Extended Data: Supported
00:32:42.889 Telemetry Log Pages: Not Supported
00:32:42.889 Persistent Event Log Pages: Not Supported
00:32:42.889 Supported Log Pages Log Page: May Support
00:32:42.889 Commands Supported & Effects Log Page: Not Supported
00:32:42.889 Feature Identifiers & Effects Log Page:May Support
00:32:42.889 NVMe-MI Commands & Effects Log Page: May Support
00:32:42.889 Data Area 4 for Telemetry Log: Not Supported
00:32:42.889 Error Log Page Entries Supported: 128
00:32:42.889 Keep Alive: Not Supported
00:32:42.889
00:32:42.889 NVM Command Set Attributes
00:32:42.889 ==========================
00:32:42.889 Submission Queue Entry Size
00:32:42.889 Max: 1
00:32:42.889 Min: 1
00:32:42.889 Completion Queue Entry Size
00:32:42.889 Max: 1
00:32:42.889 Min: 1
00:32:42.889 Number of Namespaces: 0
00:32:42.889 Compare Command: Not Supported
00:32:42.889 Write Uncorrectable Command: Not Supported
00:32:42.889 Dataset Management Command: Not Supported
00:32:42.889 Write Zeroes Command: Not Supported
00:32:42.889 Set Features Save Field: Not Supported
00:32:42.889 Reservations: Not Supported
00:32:42.889 Timestamp: Not Supported
00:32:42.889 Copy: Not Supported
00:32:42.889 Volatile Write Cache: Not Present
00:32:42.889 Atomic Write Unit (Normal): 1
00:32:42.889 Atomic Write Unit (PFail): 1
00:32:42.889 Atomic Compare & Write Unit: 1
00:32:42.889 Fused Compare & Write: Supported
00:32:42.889 Scatter-Gather List
00:32:42.889 SGL Command Set: Supported
00:32:42.889 SGL Keyed: Supported
00:32:42.889 SGL Bit Bucket Descriptor: Not Supported
00:32:42.889 SGL Metadata Pointer: Not Supported
00:32:42.889 Oversized SGL: Not Supported
00:32:42.889 SGL Metadata Address: Not Supported
00:32:42.889 SGL Offset: Supported
00:32:42.889 Transport SGL Data Block: Not Supported
00:32:42.889 Replay Protected Memory Block: Not Supported
00:32:42.889
00:32:42.889 Firmware Slot Information
00:32:42.889 =========================
00:32:42.889 Active slot: 0
00:32:42.889
00:32:42.889
00:32:42.889 Error Log
00:32:42.889 =========
00:32:42.889
00:32:42.889 Active Namespaces
00:32:42.889 =================
00:32:42.889 Discovery Log Page
00:32:42.889 ==================
00:32:42.889 Generation Counter: 2
00:32:42.889 Number of Records: 2
00:32:42.889 Record Format: 0
00:32:42.889
00:32:42.889 Discovery Log Entry 0
00:32:42.889 ----------------------
00:32:42.889 Transport Type: 3 (TCP)
00:32:42.889 Address Family: 1 (IPv4)
00:32:42.889 Subsystem Type: 3 (Current Discovery Subsystem)
00:32:42.889 Entry Flags:
00:32:42.889 Duplicate Returned Information: 1
00:32:42.889 Explicit Persistent Connection Support for Discovery: 1
00:32:42.889 Transport Requirements:
00:32:42.889 Secure Channel: Not Required
00:32:42.889 Port ID: 0 (0x0000)
00:32:42.889 Controller ID: 65535 (0xffff)
00:32:42.889 Admin Max SQ Size: 128
00:32:42.889 Transport Service Identifier: 4420
00:32:42.889 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:32:42.889 Transport Address: 10.0.0.2
00:32:42.889 Discovery Log Entry 1
00:32:42.889 ----------------------
00:32:42.889 Transport Type: 3 (TCP)
00:32:42.889 Address Family: 1 (IPv4)
00:32:42.889 Subsystem Type: 2 (NVM Subsystem)
00:32:42.889 Entry Flags:
00:32:42.889 Duplicate Returned Information: 0
00:32:42.889 Explicit Persistent Connection Support for Discovery: 0
00:32:42.889 Transport Requirements:
00:32:42.889 Secure Channel: Not Required
00:32:42.889 Port ID: 0 (0x0000)
00:32:42.889 Controller ID: 65535 (0xffff)
00:32:42.889 Admin Max SQ Size: 128
00:32:42.889 Transport Service Identifier: 4420
00:32:42.889 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:32:42.889 Transport Address: 10.0.0.2 [2024-06-11 14:00:35.769885] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:32:42.889 [2024-06-11 14:00:35.769901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830e40) on tqpair=0x17c5f00
00:32:42.889 [2024-06-11 14:00:35.769911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:42.889 [2024-06-11 14:00:35.769921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1830fc0) on tqpair=0x17c5f00
00:32:42.889 [2024-06-11 14:00:35.769930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:42.889 [2024-06-11 14:00:35.769939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1831140) on tqpair=0x17c5f00
00:32:42.889 [2024-06-11 14:00:35.769947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:42.889 [2024-06-11 14:00:35.769955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18312c0) on tqpair=0x17c5f00
00:32:42.889 [2024-06-11 14:00:35.769963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:42.889 [2024-06-11 14:00:35.769975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:32:42.889 [2024-06-11 14:00:35.769983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:32:42.889 [2024-06-11 14:00:35.769990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c5f00)
00:32:42.889 [2024-06-11 14:00:35.770002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:42.889 [2024-06-11 14:00:35.770022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18312c0, cid 3, qid 0
00:32:42.889 [2024-06-11 14:00:35.770146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:42.889 [2024-06-11 14:00:35.770156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:42.890 [2024-06-11 14:00:35.770163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:42.890 [2024-06-11 14:00:35.770169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18312c0) on tqpair=0x17c5f00
00:32:42.890 [2024-06-11 14:00:35.770180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:32:42.890 [2024-06-11 14:00:35.770187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:32:42.890 [2024-06-11 14:00:35.770193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c5f00)
00:32:42.890 [2024-06-11
14:00:35.770203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:42.890 [2024-06-11 14:00:35.770223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18312c0, cid 3, qid 0
00:32:42.890 [2024-06-11 14:00:35.770414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:42.890 [2024-06-11 14:00:35.770423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:42.890 [2024-06-11 14:00:35.770430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:42.890 [2024-06-11 14:00:35.770437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18312c0) on tqpair=0x17c5f00
00:32:42.890 [2024-06-11 14:00:35.770444] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:32:42.890 [2024-06-11 14:00:35.770453] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
[... ~20 near-identical CSTS shutdown-poll iterations omitted: each builds and sends a FABRIC PROPERTY GET capsule (qid:0 cid:3), receives the capsule response PDU (type 5), and completes tcp_req 0x18312c0 on tqpair 0x17c5f00 ...]
00:32:42.892 [2024-06-11 14:00:35.778489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:42.892 [2024-06-11 14:00:35.778502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:42.892 [2024-06-11 14:00:35.778508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:42.892 [2024-06-11 14:00:35.778515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18312c0) on tqpair=0x17c5f00
00:32:42.892 [2024-06-11 14:00:35.778530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:32:42.892 [2024-06-11 14:00:35.778537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:32:42.892 [2024-06-11 14:00:35.778544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17c5f00) 00:32:42.892
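
A note for readers tracing the loop above: it is the NVMe shutdown handshake. nvme_ctrlr_shutdown_set_cc_done has just written CC.SHN, and the driver polls CSTS.SHST over the admin queue until the controller reports shutdown complete; over TCP each poll is one FABRIC PROPERTY GET capsule (cid:3 here). Below is a minimal sketch of the same sequence, using the register layouts from spdk/nvme_spec.h; read_cc/write_cc/read_csts are hypothetical property accessors standing in for those capsules (they are not SPDK APIs), and the inter-poll sleep is elided. The final poll iteration and its completion record continue after the sketch.

#include <stdbool.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

/* Hypothetical property accessors; over fabrics these would be issued as
 * FABRIC PROPERTY GET/SET commands on the admin queue pair. */
extern union spdk_nvme_cc_register read_cc(void);
extern void write_cc(union spdk_nvme_cc_register cc);
extern union spdk_nvme_csts_register read_csts(void);

static bool shutdown_controller(uint32_t timeout_ms)
{
	union spdk_nvme_cc_register cc = read_cc();

	cc.bits.shn = SPDK_NVME_SHN_NORMAL;     /* announce a normal shutdown */
	write_cc(cc);

	for (uint32_t elapsed = 0; elapsed < timeout_ms; elapsed++) {
		if (read_csts().bits.shst == SPDK_NVME_SHST_COMPLETE) {
			return true;            /* cf. "shutdown complete in 8 milliseconds" below */
		}
		/* sleep ~1 ms between polls (elided) */
	}
	return false;                           /* cf. "shutdown timeout = 10000 ms" above */
}
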
[2024-06-11 14:00:35.778554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:42.892 [2024-06-11 14:00:35.778571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18312c0, cid 3, qid 0
00:32:42.892 [2024-06-11 14:00:35.778760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:32:42.892 [2024-06-11 14:00:35.778770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:32:42.892 [2024-06-11 14:00:35.778777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:32:42.892 [2024-06-11 14:00:35.778783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18312c0) on tqpair=0x17c5f00
00:32:42.892 [2024-06-11 14:00:35.778795] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds
00:32:42.892
00:32:43.157 14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:32:43.157 [2024-06-11 14:00:35.823372] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:32:43.157 [2024-06-11 14:00:35.823413] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575399 ]
00:32:43.157 EAL: No free 2048 kB hugepages reported on node 1
00:32:43.157 [2024-06-11 14:00:35.858652] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:32:43.157 [2024-06-11 14:00:35.858705] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:32:43.157 [2024-06-11 14:00:35.858713] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:32:43.157 [2024-06-11 14:00:35.858728] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:32:43.157 [2024-06-11 14:00:35.858740] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:32:43.157 [2024-06-11 14:00:35.862518] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:32:43.157 [2024-06-11 14:00:35.862553] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1048f00 0
00:32:43.157 [2024-06-11 14:00:35.870491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:32:43.157 [2024-06-11 14:00:35.870507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:32:43.157 [2024-06-11 14:00:35.870514] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:32:43.157 [2024-06-11 14:00:35.870520] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:32:43.157 [2024-06-11 14:00:35.870564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:32:43.157 [2024-06-11 14:00:35.870572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:32:43.157 [2024-06-11 14:00:35.870578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00)
00:32:43.157 [2024-06-11 14:00:35.870593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL
DATA BLOCK OFFSET 0x0 len:0x400 00:32:43.157 [2024-06-11 14:00:35.870615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.157 [2024-06-11 14:00:35.878486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.157 [2024-06-11 14:00:35.878507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.157 [2024-06-11 14:00:35.878514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.878521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.157 [2024-06-11 14:00:35.878534] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:32:43.157 [2024-06-11 14:00:35.878544] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:32:43.157 [2024-06-11 14:00:35.878552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:32:43.157 [2024-06-11 14:00:35.878568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.878575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.878582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.157 [2024-06-11 14:00:35.878593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.157 [2024-06-11 14:00:35.878612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.157 [2024-06-11 14:00:35.878849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.157 [2024-06-11 14:00:35.878859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.157 [2024-06-11 14:00:35.878865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.878872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.157 [2024-06-11 14:00:35.878880] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:32:43.157 [2024-06-11 14:00:35.878892] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:32:43.157 [2024-06-11 14:00:35.878906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.878913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.878919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.157 [2024-06-11 14:00:35.878929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.157 [2024-06-11 14:00:35.878945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.157 [2024-06-11 14:00:35.879041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.157 [2024-06-11 14:00:35.879051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.157 [2024-06-11 14:00:35.879057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879064] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.157 [2024-06-11 14:00:35.879072] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:32:43.157 [2024-06-11 14:00:35.879086] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:32:43.157 [2024-06-11 14:00:35.879096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.157 [2024-06-11 14:00:35.879119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.157 [2024-06-11 14:00:35.879135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.157 [2024-06-11 14:00:35.879273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.157 [2024-06-11 14:00:35.879282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.157 [2024-06-11 14:00:35.879288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.157 [2024-06-11 14:00:35.879303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:32:43.157 [2024-06-11 14:00:35.879318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.157 [2024-06-11 14:00:35.879341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.157 [2024-06-11 14:00:35.879357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.157 [2024-06-11 14:00:35.879452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.157 [2024-06-11 14:00:35.879461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.157 [2024-06-11 14:00:35.879467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.157 [2024-06-11 14:00:35.879486] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:32:43.157 [2024-06-11 14:00:35.879495] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:32:43.157 [2024-06-11 14:00:35.879508] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:32:43.157 [2024-06-11 14:00:35.879617] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:32:43.157 
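
The state names just logged ("check en", "disable and wait for CSTS.RDY = 0", "Setting CC.EN = 1") follow the controller-enable procedure from the NVMe base specification; the register write itself and the CSTS.RDY wait continue below. A rough sketch of that procedure, under the same hypothetical read_cc/write_cc/read_csts accessors as the shutdown example and with the 15000 ms timeouts from the trace left out:

#include <stdbool.h>
#include "spdk/nvme_spec.h"

extern union spdk_nvme_cc_register read_cc(void);
extern void write_cc(union spdk_nvme_cc_register cc);
extern union spdk_nvme_csts_register read_csts(void);

static void enable_controller(void)
{
	union spdk_nvme_cc_register cc = read_cc();

	if (cc.bits.en) {                       /* "check en": already enabled? */
		cc.bits.en = 0;                 /* then disable it first */
		write_cc(cc);
	}
	while (read_csts().bits.rdy != 0) {     /* "disable and wait for CSTS.RDY = 0" */
		/* poll (bounded by a timeout in the real driver) */
	}

	cc.bits.en = 1;                         /* "Setting CC.EN = 1" */
	write_cc(cc);
	while (read_csts().bits.rdy != 1) {     /* "wait for CSTS.RDY = 1" */
		/* poll */
	}
}

In the run above, CC.EN and CSTS.RDY both read back 0, so the state machine went straight from "controller is disabled" to the CC.EN = 1 write.
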
[2024-06-11 14:00:35.879627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:32:43.157 [2024-06-11 14:00:35.879638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.157 [2024-06-11 14:00:35.879662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.157 [2024-06-11 14:00:35.879678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.157 [2024-06-11 14:00:35.879817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.157 [2024-06-11 14:00:35.879826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.157 [2024-06-11 14:00:35.879832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.157 [2024-06-11 14:00:35.879846] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:32:43.157 [2024-06-11 14:00:35.879861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.157 [2024-06-11 14:00:35.879874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.879884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.158 [2024-06-11 14:00:35.879899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.158 [2024-06-11 14:00:35.879991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.158 [2024-06-11 14:00:35.880000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.158 [2024-06-11 14:00:35.880007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.158 [2024-06-11 14:00:35.880021] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:32:43.158 [2024-06-11 14:00:35.880029] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.880042] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:32:43.158 [2024-06-11 14:00:35.880054] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.880067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.158 [2024-06-11 14:00:35.880099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.158 [2024-06-11 14:00:35.880229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.158 [2024-06-11 14:00:35.880238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.158 [2024-06-11 14:00:35.880245] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880251] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=4096, cccid=0 00:32:43.158 [2024-06-11 14:00:35.880260] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b3e40) on tqpair(0x1048f00): expected_datao=0, payload_size=4096 00:32:43.158 [2024-06-11 14:00:35.880270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880281] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880287] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.158 [2024-06-11 14:00:35.880358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.158 [2024-06-11 14:00:35.880364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.158 [2024-06-11 14:00:35.880381] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:32:43.158 [2024-06-11 14:00:35.880389] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:32:43.158 [2024-06-11 14:00:35.880397] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:32:43.158 [2024-06-11 14:00:35.880404] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:32:43.158 [2024-06-11 14:00:35.880412] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:32:43.158 [2024-06-11 14:00:35.880420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.880437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.880450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:43.158 [2024-06-11 14:00:35.880497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 
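
The max_xfer_size / MDTS / CNTLID / fuses lines above come from the IDENTIFY CONTROLLER data the driver just fetched and cached (the AER configuration it is now sending gets its response further down). An application reads the same data back without reissuing the command; a small sketch, assuming a struct spdk_nvme_ctrlr that was already connected with spdk_nvme_connect():

#include <stdio.h>
#include "spdk/nvme.h"

/* Print a few of the identify-derived limits logged above.
 * Sketch only: no error handling, ctrlr assumed valid. */
static void print_ctrlr_limits(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("CNTLID:                  0x%04x\n", cdata->cntlid);   /* 0x0001 in the trace */
	printf("MDTS (raw):              %u\n", cdata->mdts);         /* yields the 131072-byte cap */
	printf("max_xfer_size:           %u\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
	printf("fused compare-and-write: %u\n", cdata->fuses.compare_and_write);
}
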
00:32:43.158 [2024-06-11 14:00:35.880593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.158 [2024-06-11 14:00:35.880603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.158 [2024-06-11 14:00:35.880609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00 00:32:43.158 [2024-06-11 14:00:35.880625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.158 [2024-06-11 14:00:35.880657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.158 [2024-06-11 14:00:35.880689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.158 [2024-06-11 14:00:35.880723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.158 [2024-06-11 14:00:35.880753] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.880769] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.880779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.880795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.158 [2024-06-11 14:00:35.880813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3e40, cid 0, qid 0 00:32:43.158 [2024-06-11 14:00:35.880821] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b3fc0, cid 1, qid 0 00:32:43.158 [2024-06-11 14:00:35.880829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4140, cid 2, qid 0 00:32:43.158 [2024-06-11 14:00:35.880837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.158 [2024-06-11 14:00:35.880844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.158 [2024-06-11 14:00:35.880959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.158 [2024-06-11 14:00:35.880969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.158 [2024-06-11 14:00:35.880975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.880982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.158 [2024-06-11 14:00:35.880989] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:32:43.158 [2024-06-11 14:00:35.880998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.881011] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.881024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.881034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.881041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.881048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.881057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:43.158 [2024-06-11 14:00:35.881073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.158 [2024-06-11 14:00:35.881167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.158 [2024-06-11 14:00:35.881177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.158 [2024-06-11 14:00:35.881183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.881190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.158 [2024-06-11 14:00:35.881251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.881267] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:32:43.158 [2024-06-11 14:00:35.881278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.881285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.158 [2024-06-11 14:00:35.881294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.158 [2024-06-11 14:00:35.881310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.158 [2024-06-11 14:00:35.881431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.158 [2024-06-11 14:00:35.881441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.158 [2024-06-11 14:00:35.881447] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.881454] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=4096, cccid=4 00:32:43.158 [2024-06-11 14:00:35.881462] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b4440) on tqpair(0x1048f00): expected_datao=0, payload_size=4096 00:32:43.158 [2024-06-11 14:00:35.881470] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.158 [2024-06-11 14:00:35.881484] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881491] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.881554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.881561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.881583] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:32:43.159 [2024-06-11 14:00:35.881608] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.881622] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.881633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.881650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.159 [2024-06-11 14:00:35.881667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.159 [2024-06-11 14:00:35.881788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.159 [2024-06-11 14:00:35.881798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.159 [2024-06-11 14:00:35.881804] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881811] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=4096, cccid=4 00:32:43.159 [2024-06-11 14:00:35.881819] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b4440) on tqpair(0x1048f00): expected_datao=0, payload_size=4096 00:32:43.159 [2024-06-11 14:00:35.881826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881836] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:32:43.159 [2024-06-11 14:00:35.881843] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.881910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.881919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.881941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.881956] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.881967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.881973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.881983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.159 [2024-06-11 14:00:35.881999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.159 [2024-06-11 14:00:35.882129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.159 [2024-06-11 14:00:35.882139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.159 [2024-06-11 14:00:35.882146] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882152] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=4096, cccid=4 00:32:43.159 [2024-06-11 14:00:35.882160] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b4440) on tqpair(0x1048f00): expected_datao=0, payload_size=4096 00:32:43.159 [2024-06-11 14:00:35.882168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882177] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882184] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.882251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.882257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.882274] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.882288] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.882300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.882310] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.882319] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.882328] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:32:43.159 [2024-06-11 14:00:35.882336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:32:43.159 [2024-06-11 14:00:35.882345] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:32:43.159 [2024-06-11 14:00:35.882366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.882382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.159 [2024-06-11 14:00:35.882395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.882408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.882417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.159 [2024-06-11 14:00:35.882437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.159 [2024-06-11 14:00:35.882445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b45c0, cid 5, qid 0 00:32:43.159 [2024-06-11 14:00:35.886487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.886498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.886504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.886511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.886521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.886530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.886536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.886543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b45c0) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.886557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.886564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.886574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.159 [2024-06-11 14:00:35.886591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b45c0, cid 5, qid 0 00:32:43.159 [2024-06-11 
14:00:35.886855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.886865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.886871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.886878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b45c0) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.886892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.886899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.886908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.159 [2024-06-11 14:00:35.886924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b45c0, cid 5, qid 0 00:32:43.159 [2024-06-11 14:00:35.887088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.159 [2024-06-11 14:00:35.887097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.159 [2024-06-11 14:00:35.887103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.887110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b45c0) on tqpair=0x1048f00 00:32:43.159 [2024-06-11 14:00:35.887124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.159 [2024-06-11 14:00:35.887131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1048f00) 00:32:43.159 [2024-06-11 14:00:35.887141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.159 [2024-06-11 14:00:35.887156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b45c0, cid 5, qid 0 00:32:43.159 [2024-06-11 14:00:35.887261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.160 [2024-06-11 14:00:35.887270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.160 [2024-06-11 14:00:35.887279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b45c0) on tqpair=0x1048f00 00:32:43.160 [2024-06-11 14:00:35.887303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1048f00) 00:32:43.160 [2024-06-11 14:00:35.887320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.160 [2024-06-11 14:00:35.887330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1048f00) 00:32:43.160 [2024-06-11 14:00:35.887346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.160 [2024-06-11 14:00:35.887357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.160 [2024-06-11 
14:00:35.887363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1048f00) 00:32:43.160 [2024-06-11 14:00:35.887372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.160 [2024-06-11 14:00:35.887383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1048f00) 00:32:43.160 [2024-06-11 14:00:35.887399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.160 [2024-06-11 14:00:35.887415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b45c0, cid 5, qid 0 00:32:43.160 [2024-06-11 14:00:35.887424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4440, cid 4, qid 0 00:32:43.160 [2024-06-11 14:00:35.887431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b4740, cid 6, qid 0 00:32:43.160 [2024-06-11 14:00:35.887439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b48c0, cid 7, qid 0 00:32:43.160 [2024-06-11 14:00:35.887629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.160 [2024-06-11 14:00:35.887640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.160 [2024-06-11 14:00:35.887646] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887652] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=8192, cccid=5 00:32:43.160 [2024-06-11 14:00:35.887661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b45c0) on tqpair(0x1048f00): expected_datao=0, payload_size=8192 00:32:43.160 [2024-06-11 14:00:35.887669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887679] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887685] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.160 [2024-06-11 14:00:35.887702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.160 [2024-06-11 14:00:35.887709] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=512, cccid=4 00:32:43.160 [2024-06-11 14:00:35.887723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b4440) on tqpair(0x1048f00): expected_datao=0, payload_size=512 00:32:43.160 [2024-06-11 14:00:35.887731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887740] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887747] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.160 [2024-06-11 14:00:35.887767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.160 [2024-06-11 14:00:35.887773] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887779] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=512, cccid=6 00:32:43.160 [2024-06-11 14:00:35.887787] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b4740) on tqpair(0x1048f00): expected_datao=0, payload_size=512 00:32:43.160 [2024-06-11 14:00:35.887795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887804] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887811] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:32:43.160 [2024-06-11 14:00:35.887828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:32:43.160 [2024-06-11 14:00:35.887834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1048f00): datao=0, datal=4096, cccid=7 00:32:43.160 [2024-06-11 14:00:35.887848] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10b48c0) on tqpair(0x1048f00): expected_datao=0, payload_size=4096 00:32:43.160 [2024-06-11 14:00:35.887856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887877] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887884] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.160 [2024-06-11 14:00:35.887904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.160 [2024-06-11 14:00:35.887910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b45c0) on tqpair=0x1048f00 00:32:43.160 [2024-06-11 14:00:35.887934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.160 [2024-06-11 14:00:35.887943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.160 [2024-06-11 14:00:35.887949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4440) on tqpair=0x1048f00 00:32:43.160 [2024-06-11 14:00:35.887969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.160 [2024-06-11 14:00:35.887978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.160 [2024-06-11 14:00:35.887984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.887990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4740) on tqpair=0x1048f00 00:32:43.160 [2024-06-11 14:00:35.888003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.160 [2024-06-11 14:00:35.888012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.160 [2024-06-11 14:00:35.888018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.160 [2024-06-11 14:00:35.888025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b48c0) on tqpair=0x1048f00 00:32:43.160 
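The trace above is the host side of the NVMe/TCP state machine finishing the identify flow: each admin command leaves through nvme_tcp_qpair_capsule_cmd_send, and the reply PDU is classified in nvme_tcp_pdu_ch_handle, where pdu type = 5 is a CapsuleResp and pdu type = 7 is C2HData. That matches the four GET LOG PAGE commands on cids 5, 4, 6 and 7: their cdw10 values select the Error (01h), SMART / Health (02h), Firmware Slot (03h) and Commands Supported and Effects (05h) pages, and the NUMDL fields encode exactly the 8192-, 512-, 512- and 4096-byte payloads that the c2h_data PDUs (datal=8192/512/512/4096) then deliver before the controller report below is printed. As a rough sketch of replaying just this step by hand against the same listener (the example binary path is an assumption about this workspace; the -r transport string is the one spdk_nvme_perf uses later in this log):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# hypothetical manual re-run of the identify step while the target is still up
./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'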
=====================================================
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
=====================================================
Controller Capabilities/Features
================================
Vendor ID: 8086
Subsystem Vendor ID: 8086
Serial Number: SPDK00000000000001
Model Number: SPDK bdev Controller
Firmware Version: 24.09
Recommended Arb Burst: 6
IEEE OUI Identifier: e4 d2 5c
Multi-path I/O
May have multiple subsystem ports: Yes
May have multiple controllers: Yes
Associated with SR-IOV VF: No
Max Data Transfer Size: 131072
Max Number of Namespaces: 32
Max Number of I/O Queues: 127
NVMe Specification Version (VS): 1.3
NVMe Specification Version (Identify): 1.3
Maximum Queue Entries: 128
Contiguous Queues Required: Yes
Arbitration Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 15000 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 4096 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Not Supported
Flexible Data Placement Supported: Not Supported

Controller Memory Buffer Support
================================
Supported: No

Persistent Memory Region Support
================================
Supported: No

Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Not Supported
Firmware Activate/Download: Not Supported
Namespace Management: Not Supported
Device Self-Test: Not Supported
Directives: Not Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Not Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: No
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2016-06.io.spdk:cnode1
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page:May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 128
Keep Alive: Supported
Keep Alive Granularity: 10000 ms

NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 32
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Not Supported
Reservations: Supported
Timestamp: Not Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported

Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 24.09


Commands Supported and Effects
==============================
Admin Commands
--------------
Get Log Page (02h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Keep Alive (18h): Supported
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Copy (19h): Supported LBA-Change
Unknown (79h): Supported LBA-Change
Unknown (7Ah): Supported

Error Log
=========

Arbitration
===========
Arbitration Burst: 1

Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 0.00 W
Non-Operational State: Operational
Entry Latency: Not Reported
Exit Latency: Not Reported
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported

Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 0 Kelvin (-273 Celsius)
Temperature Threshold: [2024-06-11 14:00:35.888141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-06-11 14:00:35.888150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1048f00)
[2024-06-11 14:00:35.888160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-06-11 14:00:35.888178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b48c0, cid 7, qid 0
[2024-06-11 14:00:35.888277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-06-11 14:00:35.888287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-06-11 14:00:35.888295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-06-11 14:00:35.888302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b48c0) on tqpair=0x1048f00
[2024-06-11 14:00:35.888341] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
[2024-06-11 14:00:35.888355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3e40) on tqpair=0x1048f00
[2024-06-11 14:00:35.888364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 14:00:35.888373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b3fc0) on tqpair=0x1048f00
[2024-06-11 14:00:35.888381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-06-11 14:00:35.888389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b4140) on tqpair=0x1048f00
[2024-06-11 14:00:35.888397] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.162 [2024-06-11 14:00:35.888406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.162 [2024-06-11 14:00:35.888413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.162 [2024-06-11 14:00:35.888425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.162 [2024-06-11 14:00:35.888448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.162 [2024-06-11 14:00:35.888466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.162 [2024-06-11 14:00:35.888572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.162 [2024-06-11 14:00:35.888582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.162 [2024-06-11 14:00:35.888588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.162 [2024-06-11 14:00:35.888605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.162 [2024-06-11 14:00:35.888628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.162 [2024-06-11 14:00:35.888648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.162 [2024-06-11 14:00:35.888779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.162 [2024-06-11 14:00:35.888788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.162 [2024-06-11 14:00:35.888795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.162 [2024-06-11 14:00:35.888809] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:32:43.162 [2024-06-11 14:00:35.888817] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:32:43.162 [2024-06-11 14:00:35.888831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.888844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.162 [2024-06-11 14:00:35.888854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.162 [2024-06-11 14:00:35.888872] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.162 [2024-06-11 14:00:35.888981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.162 [2024-06-11 14:00:35.888990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.162 [2024-06-11 14:00:35.888997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.162 [2024-06-11 14:00:35.889017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.162 [2024-06-11 14:00:35.889040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.162 [2024-06-11 14:00:35.889055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.162 [2024-06-11 14:00:35.889145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.162 [2024-06-11 14:00:35.889155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.162 [2024-06-11 14:00:35.889161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.162 [2024-06-11 14:00:35.889182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.162 [2024-06-11 14:00:35.889204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.162 [2024-06-11 14:00:35.889220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.162 [2024-06-11 14:00:35.889334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.162 [2024-06-11 14:00:35.889343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.162 [2024-06-11 14:00:35.889350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.162 [2024-06-11 14:00:35.889370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.162 [2024-06-11 14:00:35.889383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.162 [2024-06-11 14:00:35.889393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.162 [2024-06-11 14:00:35.889408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.889535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 
14:00:35.889545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.889551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.889572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.163 [2024-06-11 14:00:35.889595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.163 [2024-06-11 14:00:35.889613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.889737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 14:00:35.889746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.889752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.889773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.163 [2024-06-11 14:00:35.889796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.163 [2024-06-11 14:00:35.889811] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.889904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 14:00:35.889913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.889920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.889940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.889953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.163 [2024-06-11 14:00:35.889963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.163 [2024-06-11 14:00:35.889978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.890088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 14:00:35.890097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.890104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 
[2024-06-11 14:00:35.890110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.890124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.890131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.890137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.163 [2024-06-11 14:00:35.890147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.163 [2024-06-11 14:00:35.890162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.890290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 14:00:35.890299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.890306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.890312] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.890326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.890333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.890339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.163 [2024-06-11 14:00:35.890349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.163 [2024-06-11 14:00:35.890364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.890457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 14:00:35.890466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.890473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.894488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.894505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.894512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.894518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1048f00) 00:32:43.163 [2024-06-11 14:00:35.894529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.163 [2024-06-11 14:00:35.894546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10b42c0, cid 3, qid 0 00:32:43.163 [2024-06-11 14:00:35.894722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:32:43.163 [2024-06-11 14:00:35.894732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:32:43.163 [2024-06-11 14:00:35.894738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:32:43.163 [2024-06-11 14:00:35.894745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10b42c0) on tqpair=0x1048f00 00:32:43.163 [2024-06-11 14:00:35.894756] 
nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
0 Kelvin (-273 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 0
Data Units Written: 0
Host Read Commands: 0
Host Write Commands: 0
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes

Number of Queues
================
Number of I/O Submission Queues: 127
Number of I/O Completion Queues: 127

Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Not Supported
Deallocated Read Value: Unknown
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Supported
Namespace Sharing Capabilities: Multiple Controllers
Size (in LBAs): 131072 (0GiB)
Capacity (in LBAs): 131072 (0GiB)
Utilization (in LBAs): 131072 (0GiB)
NGUID: ABCDEF0123456789ABCDEF0123456789
EUI64: ABCDEF0123456789
UUID: 46fd6574-7e73-40e8-97c3-a08a22e7e33c
Thin Provisioning: Not Supported
Per-NS Atomic Units: Yes
Atomic Boundary Size (Normal): 0
Atomic Boundary Size (PFail): 0
Atomic Boundary Offset: 0
Maximum Single Source Range Length: 65535
Maximum Copy Length: 65535
Maximum Source Range Count: 1
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 1
Current LBA Format: LBA Format #00
LBA Format #00: Data Size: 512 Metadata Size: 0

14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
14:00:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
14:00:35 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:32:43.163 rmmod nvme_tcp 00:32:43.163 rmmod nvme_fabrics 00:32:43.163 rmmod nvme_keyring 00:32:43.163 14:00:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1575114 ']' 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1575114 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 1575114 ']' 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 1575114 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:32:43.163 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:43.164 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1575114 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1575114' 00:32:43.469 killing process with pid 1575114 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 1575114 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 1575114 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:43.469 14:00:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.028 14:00:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:46.028 00:32:46.028 real 0m10.431s 00:32:46.028 user 0m7.826s 00:32:46.028 sys 0m5.365s 00:32:46.028 14:00:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:46.028 14:00:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:32:46.028 ************************************ 00:32:46.029 END TEST nvmf_identify 00:32:46.029 ************************************ 00:32:46.029 14:00:38 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:46.029 14:00:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:46.029 14:00:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:46.029 14:00:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:46.029 ************************************ 00:32:46.029 START TEST nvmf_perf 00:32:46.029 ************************************ 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:32:46.029 * Looking for test storage... 00:32:46.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.029 
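From here nvmftestinit rebuilds the physical-NIC topology for the perf run: remove_spdk_ns flushes any stale cvl_0_0_ns_spdk namespace, the two ice-driven E810 ports at 0000:af:00.0 and 0000:af:00.1 are rediscovered from the PCI bus, and the target-side port is isolated in its own network namespace so initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) talk across the physical link rather than loopback. Condensed from the nvmf_tcp_init trace that follows, the setup amounts to:

# condensed equivalent of the traced nvmf_tcp_init steps below;
# interface names are this host's renamed E810 ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
ping -c 1 10.0.0.2                                               # initiator -> target sanity check

Both pings succeeding (one packet each way, sub-millisecond rtt) is the gate for starting nvmf_tgt inside the namespace.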
14:00:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:46.029 14:00:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:52.608 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:52.608 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.608 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:52.609 Found net devices under 0000:af:00.0: cvl_0_0 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:52.609 Found net devices under 0000:af:00.1: cvl_0_1 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:52.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:32:52.609 00:32:52.609 --- 10.0.0.2 ping statistics --- 00:32:52.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.609 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:52.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:32:52.609 00:32:52.609 --- 10.0.0.1 ping statistics --- 00:32:52.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.609 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1579042 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1579042 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 1579042 ']' 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:52.609 14:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:52.609 [2024-06-11 14:00:45.515158] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:32:52.609 [2024-06-11 14:00:45.515224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.868 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.868 [2024-06-11 14:00:45.627763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.868 [2024-06-11 14:00:45.714168] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.868 [2024-06-11 14:00:45.714210] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:52.868 [2024-06-11 14:00:45.714223] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.868 [2024-06-11 14:00:45.714235] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.868 [2024-06-11 14:00:45.714244] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.868 [2024-06-11 14:00:45.714312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.868 [2024-06-11 14:00:45.714405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.868 [2024-06-11 14:00:45.714524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.868 [2024-06-11 14:00:45.714525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:53.804 14:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:32:57.097 14:00:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:32:57.097 14:00:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:32:57.097 14:00:49 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:32:57.097 14:00:49 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:57.356 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:32:57.356 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:32:57.356 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:32:57.356 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:32:57.356 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:57.615 [2024-06-11 14:00:50.274145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.615 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:57.875 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:57.875 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:57.875 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:32:57.875 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:58.135 14:00:50 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.394 [2024-06-11 14:00:51.117452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.394 14:00:51 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:58.655 14:00:51 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:32:58.655 14:00:51 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:32:58.655 14:00:51 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:32:58.655 14:00:51 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:33:00.033 Initializing NVMe Controllers 00:33:00.033 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:33:00.033 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:33:00.033 Initialization complete. Launching workers. 00:33:00.033 ======================================================== 00:33:00.033 Latency(us) 00:33:00.033 Device Information : IOPS MiB/s Average min max 00:33:00.033 PCIE (0000:d8:00.0) NSID 1 from core 0: 76972.81 300.68 415.32 48.94 5260.98 00:33:00.033 ======================================================== 00:33:00.033 Total : 76972.81 300.68 415.32 48.94 5260.98 00:33:00.033 00:33:00.033 14:00:52 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:00.033 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.412 Initializing NVMe Controllers 00:33:01.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:01.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:01.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:01.412 Initialization complete. Launching workers. 
00:33:01.412 ======================================================== 00:33:01.412 Latency(us) 00:33:01.412 Device Information : IOPS MiB/s Average min max 00:33:01.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10318.59 200.91 45414.53 00:33:01.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18862.37 7957.56 47885.38 00:33:01.412 ======================================================== 00:33:01.412 Total : 152.00 0.59 13410.09 200.91 47885.38 00:33:01.412 00:33:01.412 14:00:54 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:01.412 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.791 Initializing NVMe Controllers 00:33:02.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:02.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:02.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:02.791 Initialization complete. Launching workers. 00:33:02.791 ======================================================== 00:33:02.791 Latency(us) 00:33:02.791 Device Information : IOPS MiB/s Average min max 00:33:02.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8590.96 33.56 3725.28 666.58 9960.31 00:33:02.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3799.83 14.84 8434.26 6459.12 17025.99 00:33:02.791 ======================================================== 00:33:02.791 Total : 12390.78 48.40 5169.37 666.58 17025.99 00:33:02.791 00:33:02.791 14:00:55 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:33:02.791 14:00:55 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:33:02.791 14:00:55 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:03.050 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.587 Initializing NVMe Controllers 00:33:05.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:05.587 Controller IO queue size 128, less than required. 00:33:05.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:05.587 Controller IO queue size 128, less than required. 00:33:05.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:05.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:05.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:05.587 Initialization complete. Launching workers. 
00:33:05.587 ======================================================== 00:33:05.587 Latency(us) 00:33:05.587 Device Information : IOPS MiB/s Average min max 00:33:05.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 949.87 237.47 138955.61 69271.55 252249.84 00:33:05.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.92 149.98 227485.11 112551.59 391162.91 00:33:05.587 ======================================================== 00:33:05.587 Total : 1549.79 387.45 173225.09 69271.55 391162.91 00:33:05.587 00:33:05.587 14:00:58 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:33:05.587 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.847 No valid NVMe controllers or AIO or URING devices found 00:33:05.847 Initializing NVMe Controllers 00:33:05.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:05.847 Controller IO queue size 128, less than required. 00:33:05.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:05.847 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:33:05.847 Controller IO queue size 128, less than required. 00:33:05.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:05.847 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:33:05.847 WARNING: Some requested NVMe devices were skipped 00:33:05.847 14:00:58 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:33:05.847 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.381 Initializing NVMe Controllers 00:33:08.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:08.381 Controller IO queue size 128, less than required. 00:33:08.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:08.381 Controller IO queue size 128, less than required. 00:33:08.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:08.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:08.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:08.381 Initialization complete. Launching workers. 
00:33:08.381 00:33:08.381 ==================== 00:33:08.381 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:33:08.381 TCP transport: 00:33:08.381 polls: 25670 00:33:08.381 idle_polls: 7490 00:33:08.381 sock_completions: 18180 00:33:08.381 nvme_completions: 3997 00:33:08.381 submitted_requests: 5930 00:33:08.381 queued_requests: 1 00:33:08.381 00:33:08.381 ==================== 00:33:08.381 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:33:08.381 TCP transport: 00:33:08.381 polls: 23034 00:33:08.381 idle_polls: 5228 00:33:08.381 sock_completions: 17806 00:33:08.381 nvme_completions: 4137 00:33:08.381 submitted_requests: 6254 00:33:08.381 queued_requests: 1 00:33:08.381 ======================================================== 00:33:08.381 Latency(us) 00:33:08.381 Device Information : IOPS MiB/s Average min max 00:33:08.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 999.00 249.75 134169.35 85819.75 218252.77 00:33:08.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1033.99 258.50 125856.87 71804.66 181736.46 00:33:08.381 ======================================================== 00:33:08.381 Total : 2032.99 508.25 129941.55 71804.66 218252.77 00:33:08.381 00:33:08.381 14:01:01 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:33:08.381 14:01:01 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.641 14:01:01 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:33:08.641 14:01:01 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:33:08.641 14:01:01 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=3ee0cce7-2901-4578-acf4-e0e7aeb95db4 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3ee0cce7-2901-4578-acf4-e0e7aeb95db4 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=3ee0cce7-2901-4578-acf4-e0e7aeb95db4 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:33:13.942 { 00:33:13.942 "uuid": "3ee0cce7-2901-4578-acf4-e0e7aeb95db4", 00:33:13.942 "name": "lvs_0", 00:33:13.942 "base_bdev": "Nvme0n1", 00:33:13.942 "total_data_clusters": 381173, 00:33:13.942 "free_clusters": 381173, 00:33:13.942 "block_size": 512, 00:33:13.942 "cluster_size": 4194304 00:33:13.942 } 00:33:13.942 ]' 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="3ee0cce7-2901-4578-acf4-e0e7aeb95db4") .free_clusters' 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=381173 00:33:13.942 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3ee0cce7-2901-4578-acf4-e0e7aeb95db4") .cluster_size' 00:33:14.201 14:01:06 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:33:14.201 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=1524692 00:33:14.201 14:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 1524692 00:33:14.201 1524692 00:33:14.201 14:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1524692 -gt 20480 ']' 00:33:14.201 14:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:33:14.201 14:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ee0cce7-2901-4578-acf4-e0e7aeb95db4 lbd_0 20480 00:33:14.481 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e945d625-727a-4ef5-a5a1-be2d9c696826 00:33:14.481 14:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e945d625-727a-4ef5-a5a1-be2d9c696826 lvs_n_0 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=71b2ac6e-a30d-47fa-a15f-879f91f87597 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 71b2ac6e-a30d-47fa-a15f-879f91f87597 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=71b2ac6e-a30d-47fa-a15f-879f91f87597 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:33:15.859 { 00:33:15.859 "uuid": "3ee0cce7-2901-4578-acf4-e0e7aeb95db4", 00:33:15.859 "name": "lvs_0", 00:33:15.859 "base_bdev": "Nvme0n1", 00:33:15.859 "total_data_clusters": 381173, 00:33:15.859 "free_clusters": 376053, 00:33:15.859 "block_size": 512, 00:33:15.859 "cluster_size": 4194304 00:33:15.859 }, 00:33:15.859 { 00:33:15.859 "uuid": "71b2ac6e-a30d-47fa-a15f-879f91f87597", 00:33:15.859 "name": "lvs_n_0", 00:33:15.859 "base_bdev": "e945d625-727a-4ef5-a5a1-be2d9c696826", 00:33:15.859 "total_data_clusters": 5114, 00:33:15.859 "free_clusters": 5114, 00:33:15.859 "block_size": 512, 00:33:15.859 "cluster_size": 4194304 00:33:15.859 } 00:33:15.859 ]' 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="71b2ac6e-a30d-47fa-a15f-879f91f87597") .free_clusters' 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="71b2ac6e-a30d-47fa-a15f-879f91f87597") .cluster_size' 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 20456 00:33:15.859 20456 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:33:15.859 14:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 71b2ac6e-a30d-47fa-a15f-879f91f87597 lbd_nest_0 20456 00:33:16.118 14:01:08 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f1bd3d7c-a34d-4be3-8c85-9a031d46a065 00:33:16.118 14:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:16.377 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:33:16.377 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f1bd3d7c-a34d-4be3-8c85-9a031d46a065 00:33:16.636 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.897 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:33:16.897 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:33:16.897 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:16.897 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:16.897 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:16.898 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.107 Initializing NVMe Controllers 00:33:29.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:29.107 Initialization complete. Launching workers. 00:33:29.107 ======================================================== 00:33:29.107 Latency(us) 00:33:29.107 Device Information : IOPS MiB/s Average min max 00:33:29.107 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.00 0.02 20882.76 228.64 45614.10 00:33:29.107 ======================================================== 00:33:29.108 Total : 48.00 0.02 20882.76 228.64 45614.10 00:33:29.108 00:33:29.108 14:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:29.108 14:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:29.108 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.091 Initializing NVMe Controllers 00:33:39.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:39.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:39.091 Initialization complete. Launching workers. 
00:33:39.091 ======================================================== 00:33:39.091 Latency(us) 00:33:39.091 Device Information : IOPS MiB/s Average min max 00:33:39.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.20 9.90 12644.30 5044.23 51872.13 00:33:39.092 ======================================================== 00:33:39.092 Total : 79.20 9.90 12644.30 5044.23 51872.13 00:33:39.092 00:33:39.092 14:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:39.092 14:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:39.092 14:01:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:39.092 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.078 Initializing NVMe Controllers 00:33:49.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:49.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:49.078 Initialization complete. Launching workers. 00:33:49.078 ======================================================== 00:33:49.078 Latency(us) 00:33:49.078 Device Information : IOPS MiB/s Average min max 00:33:49.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7038.46 3.44 4545.71 323.21 12282.37 00:33:49.078 ======================================================== 00:33:49.078 Total : 7038.46 3.44 4545.71 323.21 12282.37 00:33:49.078 00:33:49.078 14:01:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:49.078 14:01:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:49.078 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.056 Initializing NVMe Controllers 00:33:59.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:59.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:59.056 Initialization complete. Launching workers. 00:33:59.056 ======================================================== 00:33:59.056 Latency(us) 00:33:59.056 Device Information : IOPS MiB/s Average min max 00:33:59.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1937.20 242.15 16527.06 1206.12 39163.80 00:33:59.056 ======================================================== 00:33:59.056 Total : 1937.20 242.15 16527.06 1206.12 39163.80 00:33:59.056 00:33:59.056 14:01:51 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:33:59.056 14:01:51 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:59.056 14:01:51 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:59.057 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.097 Initializing NVMe Controllers 00:34:09.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:09.097 Controller IO queue size 128, less than required. 00:34:09.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:34:09.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:09.097 Initialization complete. Launching workers. 00:34:09.097 ======================================================== 00:34:09.097 Latency(us) 00:34:09.097 Device Information : IOPS MiB/s Average min max 00:34:09.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11982.55 5.85 10683.04 1928.83 24748.95 00:34:09.098 ======================================================== 00:34:09.098 Total : 11982.55 5.85 10683.04 1928.83 24748.95 00:34:09.098 00:34:09.098 14:02:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:09.098 14:02:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:09.098 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.080 Initializing NVMe Controllers 00:34:19.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:19.080 Controller IO queue size 128, less than required. 00:34:19.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:19.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:19.080 Initialization complete. Launching workers. 00:34:19.080 ======================================================== 00:34:19.080 Latency(us) 00:34:19.080 Device Information : IOPS MiB/s Average min max 00:34:19.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.40 150.67 106890.99 24100.62 186331.44 00:34:19.080 ======================================================== 00:34:19.080 Total : 1205.40 150.67 106890.99 24100.62 186331.44 00:34:19.080 00:34:19.080 14:02:11 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:19.339 14:02:12 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f1bd3d7c-a34d-4be3-8c85-9a031d46a065 00:34:20.277 14:02:12 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:34:20.277 14:02:13 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e945d625-727a-4ef5-a5a1-be2d9c696826 00:34:20.537 14:02:13 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:20.796 rmmod nvme_tcp 00:34:20.796 rmmod nvme_fabrics 00:34:20.796 rmmod nvme_keyring 00:34:20.796 14:02:13 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1579042 ']' 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1579042 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 1579042 ']' 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 1579042 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:20.796 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1579042 00:34:21.055 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:21.055 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:21.055 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1579042' 00:34:21.055 killing process with pid 1579042 00:34:21.055 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 1579042 00:34:21.055 14:02:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 1579042 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:23.592 14:02:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.499 14:02:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:25.499 00:34:25.499 real 1m39.506s 00:34:25.499 user 5m51.841s 00:34:25.499 sys 0m20.677s 00:34:25.499 14:02:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:25.499 14:02:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:25.499 ************************************ 00:34:25.499 END TEST nvmf_perf 00:34:25.499 ************************************ 00:34:25.499 14:02:18 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:25.499 14:02:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:25.499 14:02:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:25.499 14:02:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:25.499 ************************************ 00:34:25.499 START TEST nvmf_fio_host 00:34:25.499 ************************************ 00:34:25.499 14:02:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:34:25.499 * Looking for test storage... 
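The nvmf_fio_host test starting here drives the same kind of TCP target through fio instead of spdk_nvme_perf: fio preloads SPDK's NVMe ioengine and addresses the fabric target via the --filename string, so no kernel block device is involved. The invocation pattern as this suite runs it (example_config.fio is the job's fio config; the filename tuple names the listener the test creates):

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    CONFIG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio
    # LD_PRELOAD swaps fio's I/O path for the userspace SPDK NVMe driver
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio $CONFIG \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096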
00:34:25.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:25.499 14:02:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.499 14:02:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.499 14:02:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.499 14:02:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:25.500 14:02:18 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:32.073 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:32.074 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
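The device scan in these lines matches both functions of an Intel E810 (vendor 0x8086, device 0x159b) against the e810 ID table, then resolves each PCI function to its kernel netdev by globbing sysfs; that is where the cvl_0_0 and cvl_0_1 names come from. A sketch of that lookup, assuming the PCI addresses this host reports:

    for pci in 0000:af:00.0 0000:af:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"   # prints the netdev bound to that function, e.g. cvl_0_0
    done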
00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:32.074 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:32.074 Found net devices under 0000:af:00.0: cvl_0_0 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:32.074 Found net devices under 0000:af:00.1: cvl_0_1 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
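nvmf_tcp_init, which runs next, is what lets one host act as both initiator and target on physical NICs: the target-side port (cvl_0_0, 10.0.0.2) moves into its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so the pings below and the later NVMe/TCP traffic traverse the link between the two ports rather than loopback. Condensed from the commands the script issues:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # admit the NVMe/TCP port through the initiator-side firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT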
00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:32.074 14:02:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.333 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.333 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.333 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:32.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:34:32.333 00:34:32.333 --- 10.0.0.2 ping statistics --- 00:34:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.333 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:34:32.333 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:34:32.333 00:34:32.333 --- 10.0.0.1 ping statistics --- 00:34:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.333 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:34:32.333 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1597435 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1597435 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 1597435 ']' 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:32.334 14:02:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.334 [2024-06-11 14:02:25.157253] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:32.334 [2024-06-11 14:02:25.157316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.334 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.593 [2024-06-11 14:02:25.265166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:32.593 [2024-06-11 14:02:25.352792] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:32.593 [2024-06-11 14:02:25.352833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.593 [2024-06-11 14:02:25.352846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.593 [2024-06-11 14:02:25.352858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.593 [2024-06-11 14:02:25.352868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:32.593 [2024-06-11 14:02:25.352921] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.593 [2024-06-11 14:02:25.353015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.593 [2024-06-11 14:02:25.353124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:32.593 [2024-06-11 14:02:25.353125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:33.530 [2024-06-11 14:02:26.283642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.530 14:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:34:33.789 Malloc1 00:34:33.789 14:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:34.048 14:02:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:34.307 14:02:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.566 [2024-06-11 14:02:27.303215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.566 14:02:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:34.825 14:02:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:35.084 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:35.084 fio-3.35 00:34:35.084 Starting 1 thread 00:34:35.084 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.618 00:34:37.618 test: (groupid=0, jobs=1): err= 0: pid=1598005: Tue Jun 11 14:02:30 2024 00:34:37.618 read: IOPS=9111, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2006msec) 00:34:37.618 slat (usec): min=2, max=255, avg= 2.28, stdev= 2.60 00:34:37.618 clat (usec): min=2853, max=13437, avg=7755.93, stdev=579.14 00:34:37.619 lat (usec): min=2883, max=13439, avg=7758.21, stdev=578.91 00:34:37.619 clat percentiles (usec): 00:34:37.619 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:34:37.619 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:34:37.619 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8586], 00:34:37.619 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[12256], 99.95th=[12518], 00:34:37.619 | 99.99th=[12780] 00:34:37.619 bw ( KiB/s): min=35584, 
max=36992, per=99.91%, avg=36414.00, stdev=612.49, samples=4 00:34:37.619 iops : min= 8896, max= 9248, avg=9103.50, stdev=153.12, samples=4 00:34:37.619 write: IOPS=9123, BW=35.6MiB/s (37.4MB/s)(71.5MiB/2006msec); 0 zone resets 00:34:37.619 slat (usec): min=2, max=226, avg= 2.39, stdev= 1.85 00:34:37.619 clat (usec): min=2465, max=11744, avg=6230.69, stdev=493.37 00:34:37.619 lat (usec): min=2480, max=11747, avg=6233.08, stdev=493.20 00:34:37.619 clat percentiles (usec): 00:34:37.619 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:34:37.619 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:34:37.619 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6980], 00:34:37.619 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9896], 99.95th=[10421], 00:34:37.619 | 99.99th=[11731] 00:34:37.619 bw ( KiB/s): min=36392, max=36672, per=99.99%, avg=36490.00, stdev=126.89, samples=4 00:34:37.619 iops : min= 9098, max= 9168, avg=9122.50, stdev=31.72, samples=4 00:34:37.619 lat (msec) : 4=0.10%, 10=99.78%, 20=0.11% 00:34:37.619 cpu : usr=64.84%, sys=30.07%, ctx=67, majf=0, minf=4 00:34:37.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:37.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:37.619 issued rwts: total=18278,18301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:37.619 00:34:37.619 Run status group 0 (all jobs): 00:34:37.619 READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.9MB), run=2006-2006msec 00:34:37.619 WRITE: bw=35.6MiB/s (37.4MB/s), 35.6MiB/s-35.6MiB/s (37.4MB/s-37.4MB/s), io=71.5MiB (75.0MB), run=2006-2006msec 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk 
'{print $3}' 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:37.619 14:02:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:34:37.887 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:34:37.887 fio-3.35 00:34:37.887 Starting 1 thread 00:34:37.887 EAL: No free 2048 kB hugepages reported on node 1 00:34:40.522 00:34:40.522 test: (groupid=0, jobs=1): err= 0: pid=1598529: Tue Jun 11 14:02:32 2024 00:34:40.522 read: IOPS=9015, BW=141MiB/s (148MB/s)(283MiB/2008msec) 00:34:40.522 slat (usec): min=3, max=111, avg= 3.89, stdev= 1.66 00:34:40.522 clat (usec): min=1564, max=16614, avg=8682.49, stdev=2168.90 00:34:40.522 lat (usec): min=1567, max=16618, avg=8686.37, stdev=2169.11 00:34:40.522 clat percentiles (usec): 00:34:40.522 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6718], 00:34:40.522 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9241], 00:34:40.522 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11600], 95.00th=[12387], 00:34:40.522 | 99.00th=[14222], 99.50th=[15008], 99.90th=[16057], 99.95th=[16319], 00:34:40.522 | 99.99th=[16581] 00:34:40.522 bw ( KiB/s): min=54144, max=91232, per=49.20%, avg=70976.00, stdev=15386.67, samples=4 00:34:40.522 iops : min= 3384, max= 5702, avg=4436.00, stdev=961.67, samples=4 00:34:40.522 write: IOPS=5292, BW=82.7MiB/s (86.7MB/s)(144MiB/1746msec); 0 zone resets 00:34:40.522 slat (usec): min=40, max=381, avg=41.99, stdev= 7.42 00:34:40.522 clat (usec): min=2824, max=19157, avg=9934.75, stdev=1879.48 00:34:40.522 lat (usec): min=2865, max=19198, avg=9976.74, stdev=1880.80 00:34:40.522 clat percentiles (usec): 00:34:40.522 | 1.00th=[ 6652], 5.00th=[ 7504], 10.00th=[ 7898], 20.00th=[ 8356], 00:34:40.522 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:34:40.522 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12125], 95.00th=[13566], 00:34:40.522 | 99.00th=[15795], 99.50th=[16057], 99.90th=[18220], 99.95th=[18744], 00:34:40.522 | 99.99th=[19268] 00:34:40.522 bw ( KiB/s): min=57824, max=93568, per=87.30%, avg=73920.00, stdev=14867.27, samples=4 00:34:40.522 iops : min= 3614, max= 5848, avg=4620.00, stdev=929.20, samples=4 00:34:40.522 lat (msec) : 2=0.01%, 4=0.39%, 10=66.34%, 20=33.26% 00:34:40.522 cpu : usr=88.14%, sys=10.91%, ctx=13, majf=0, minf=1 00:34:40.522 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:34:40.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:40.522 issued rwts: total=18103,9240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:40.522 00:34:40.522 Run status group 0 (all jobs): 00:34:40.522 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=283MiB (297MB), run=2008-2008msec 00:34:40.522 WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=144MiB (151MB), run=1746-1746msec 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # local bdfs 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:34:40.522 14:02:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 10.0.0.2 00:34:43.816 Nvme0n1 00:34:43.816 14:02:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:34:49.093 14:02:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=80cd0a46-2461-455d-83c4-dcf80c0c0ec8 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 80cd0a46-2461-455d-83c4-dcf80c0c0ec8 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=80cd0a46-2461-455d-83c4-dcf80c0c0ec8 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:34:49.094 { 00:34:49.094 "uuid": "80cd0a46-2461-455d-83c4-dcf80c0c0ec8", 00:34:49.094 "name": "lvs_0", 00:34:49.094 "base_bdev": "Nvme0n1", 00:34:49.094 "total_data_clusters": 1489, 00:34:49.094 "free_clusters": 1489, 00:34:49.094 "block_size": 512, 00:34:49.094 "cluster_size": 
1073741824 00:34:49.094 } 00:34:49.094 ]' 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="80cd0a46-2461-455d-83c4-dcf80c0c0ec8") .free_clusters' 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=1489 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="80cd0a46-2461-455d-83c4-dcf80c0c0ec8") .cluster_size' 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1524736 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1524736 00:34:49.094 1524736 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1524736 00:34:49.094 9c8791f2-bb82-42a8-a4f4-72d214b5df95 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:34:49.094 14:02:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:34:49.353 14:02:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 
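Every fio stage in this test follows the same recipe, visible in the xtrace above: fio_plugin probes the plugin binary with ldd for a linked ASan runtime (so a sanitizer build could be preloaded first; here asan_lib stays empty), then LD_PRELOADs the SPDK engine and hands fio a transport ID in place of a filename. A minimal standalone sketch of that invocation, assuming the build paths the trace shows and fio installed under /usr/src/fio:

    # Preload the SPDK fio plugin so the job file's ioengine=spdk resolves
    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    # --filename is not a path: it is a space-separated NVMe-oF transport ID
    # (transport type, address family, target address, service port, namespace)
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The free-MB arithmetic above is just clusters times cluster size: lvs_0 reports 1489 free clusters of 1 GiB (1073741824 bytes), i.e. 1489 * 1024 = 1524736 MiB, which is exactly the size passed to bdev_lvol_create for lbd_0.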
00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:34:49.612 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:49.893 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:49.893 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:49.893 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:49.893 14:02:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:50.161 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:50.161 fio-3.35 00:34:50.161 Starting 1 thread 00:34:50.161 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.687 00:34:52.687 test: (groupid=0, jobs=1): err= 0: pid=1600805: Tue Jun 11 14:02:45 2024 00:34:52.687 read: IOPS=7976, BW=31.2MiB/s (32.7MB/s)(62.5MiB/2006msec) 00:34:52.687 slat (usec): min=2, max=116, avg= 2.39, stdev= 1.30 00:34:52.687 clat (usec): min=431, max=270420, avg=8656.41, stdev=15859.73 00:34:52.687 lat (usec): min=433, max=270425, avg=8658.80, stdev=15859.79 00:34:52.687 clat percentiles (msec): 00:34:52.687 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:34:52.687 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:34:52.687 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:34:52.687 | 99.00th=[ 10], 99.50th=[ 10], 99.90th=[ 271], 99.95th=[ 271], 00:34:52.687 | 99.99th=[ 271] 00:34:52.687 bw ( KiB/s): min=16184, max=37232, per=99.88%, avg=31866.00, stdev=10456.03, samples=4 00:34:52.687 iops : min= 4046, max= 9308, avg=7966.50, stdev=2614.01, samples=4 00:34:52.687 write: IOPS=7955, BW=31.1MiB/s (32.6MB/s)(62.3MiB/2006msec); 0 zone resets 00:34:52.687 slat (nsec): min=2264, max=102192, avg=2527.20, stdev=909.62 00:34:52.687 clat (usec): min=287, max=268909, avg=7272.17, stdev=16944.72 00:34:52.687 lat (usec): min=289, max=268916, avg=7274.70, stdev=16944.86 00:34:52.687 clat percentiles (msec): 00:34:52.687 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:34:52.687 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:34:52.687 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:34:52.687 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 271], 99.95th=[ 271], 00:34:52.687 | 99.99th=[ 271] 00:34:52.687 bw ( KiB/s): min=17232, max=36928, per=99.92%, avg=31796.00, stdev=9711.63, samples=4 00:34:52.687 iops : min= 4308, max= 9232, avg=7949.00, stdev=2427.91, samples=4 00:34:52.687 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:34:52.687 lat (msec) : 2=0.09%, 4=0.26%, 10=99.10%, 20=0.11%, 500=0.40% 00:34:52.687 cpu : usr=67.58%, sys=28.38%, ctx=60, majf=0, minf=4 00:34:52.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:34:52.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:34:52.687 issued rwts: total=16000,15958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:52.687 00:34:52.687 Run status group 0 (all jobs): 00:34:52.687 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.5MiB (65.5MB), run=2006-2006msec 00:34:52.687 WRITE: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.3MiB (65.4MB), run=2006-2006msec 00:34:52.687 14:02:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:52.945 14:02:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:34:53.877 14:02:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=0709cbdc-1db9-42e8-930b-7f3cef68dbcd 00:34:53.877 14:02:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 0709cbdc-1db9-42e8-930b-7f3cef68dbcd 00:34:53.877 14:02:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=0709cbdc-1db9-42e8-930b-7f3cef68dbcd 00:34:53.877 14:02:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:34:53.877 14:02:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:34:53.878 14:02:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:34:53.878 14:02:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:54.135 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:34:54.135 { 00:34:54.135 "uuid": "80cd0a46-2461-455d-83c4-dcf80c0c0ec8", 00:34:54.135 "name": "lvs_0", 00:34:54.135 "base_bdev": "Nvme0n1", 00:34:54.135 "total_data_clusters": 1489, 00:34:54.135 "free_clusters": 0, 00:34:54.135 "block_size": 512, 00:34:54.135 "cluster_size": 1073741824 00:34:54.135 }, 00:34:54.135 { 00:34:54.135 "uuid": "0709cbdc-1db9-42e8-930b-7f3cef68dbcd", 00:34:54.135 "name": "lvs_n_0", 00:34:54.135 "base_bdev": "9c8791f2-bb82-42a8-a4f4-72d214b5df95", 00:34:54.135 "total_data_clusters": 380811, 00:34:54.135 "free_clusters": 380811, 00:34:54.135 "block_size": 512, 00:34:54.135 "cluster_size": 4194304 00:34:54.135 } 00:34:54.135 ]' 00:34:54.135 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="0709cbdc-1db9-42e8-930b-7f3cef68dbcd") .free_clusters' 00:34:54.393 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=380811 00:34:54.393 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0709cbdc-1db9-42e8-930b-7f3cef68dbcd") .cluster_size' 00:34:54.393 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:34:54.393 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1523244 00:34:54.393 14:02:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1523244 00:34:54.393 1523244 00:34:54.393 14:02:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1523244 00:34:55.327 adbf1d99-297f-4961-b1a8-950aafe5fa16 00:34:55.327 14:02:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:34:55.584 14:02:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:34:55.843 14:02:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:34:56.101 14:02:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:56.101 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:34:56.102 14:02:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:34:56.361 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:56.361 fio-3.35 00:34:56.361 Starting 1 thread 00:34:56.361 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.895 00:34:58.895 test: (groupid=0, jobs=1): err= 0: pid=1601925: Tue Jun 11 14:02:51 2024 00:34:58.895 read: IOPS=5971, BW=23.3MiB/s (24.5MB/s)(46.9MiB/2009msec) 00:34:58.895 slat (usec): min=2, max=127, avg= 2.47, stdev= 2.92 00:34:58.895 clat (usec): min=3833, max=19208, avg=11796.50, stdev=1063.28 00:34:58.895 lat (usec): min=3853, max=19210, avg=11798.97, stdev=1063.16 00:34:58.895 clat percentiles (usec): 00:34:58.895 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:34:58.895 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:34:58.895 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13304], 00:34:58.895 | 99.00th=[14353], 99.50th=[15139], 99.90th=[17695], 99.95th=[18220], 00:34:58.895 | 99.99th=[18220] 00:34:58.895 bw ( KiB/s): min=22848, max=24360, per=99.81%, avg=23838.00, stdev=685.15, samples=4 00:34:58.895 iops : min= 5712, max= 6090, avg=5959.50, stdev=171.29, samples=4 00:34:58.895 write: IOPS=5955, BW=23.3MiB/s (24.4MB/s)(46.7MiB/2009msec); 0 zone resets 00:34:58.895 slat (usec): min=2, max=480, avg= 2.59, stdev= 5.08 00:34:58.895 clat (usec): min=1891, max=18050, avg=9511.38, stdev=927.87 00:34:58.895 lat (usec): min=1898, max=18053, avg=9513.97, stdev=927.65 00:34:58.895 clat percentiles (usec): 00:34:58.895 | 1.00th=[ 7504], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:34:58.895 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:34:58.895 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:34:58.895 | 99.00th=[11863], 99.50th=[13042], 99.90th=[16581], 99.95th=[16909], 00:34:58.895 | 99.99th=[17957] 00:34:58.895 bw ( KiB/s): min=23672, max=24008, per=100.00%, avg=23828.00, stdev=173.00, samples=4 00:34:58.895 iops : min= 5918, max= 6002, avg=5957.00, stdev=43.25, samples=4 00:34:58.895 lat (msec) : 2=0.01%, 4=0.08%, 10=38.87%, 20=61.04% 00:34:58.895 cpu : usr=63.00%, sys=31.62%, ctx=195, majf=0, minf=4 00:34:58.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:34:58.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:58.895 issued rwts: total=11996,11964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:58.895 00:34:58.895 Run status group 0 (all jobs): 00:34:58.895 READ: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.1MB), run=2009-2009msec 00:34:58.895 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (49.0MB), run=2009-2009msec 00:34:58.895 14:02:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:34:59.154 14:02:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:34:59.154 14:02:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:35:05.727 14:02:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:35:05.727 14:02:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:35:09.955 14:03:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:35:10.213 14:03:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:12.743 rmmod nvme_tcp 00:35:12.743 rmmod nvme_fabrics 00:35:12.743 rmmod nvme_keyring 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1597435 ']' 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1597435 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 1597435 ']' 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 1597435 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1597435 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1597435' 00:35:12.743 killing process with pid 1597435 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 1597435 00:35:12.743 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 1597435 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
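The teardown above unwinds the setup in strict reverse order: the subsystem goes first so no I/O can land on a dying bdev, then lvols and lvstores are deleted innermost-out, and only then is the physical controller detached. Condensed into the bare RPC sequence — a sketch of what host/fio.sh steps 72 through 80 run, with rpc.py talking to the default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exposing the namespace
    sync                                                    # flush before touching the backing lvols
    $RPC bdev_lvol_delete lvs_n_0/lbd_nest_0                # nested lvol before its lvstore
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0
    $RPC bdev_lvol_delete lvs_0/lbd_0                       # then the outer lvol/lvstore pair
    $RPC bdev_lvol_delete_lvstore -l lvs_0
    $RPC bdev_nvme_detach_controller Nvme0                  # release the PCIe NVMe device last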
00:35:13.002 14:03:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.537 14:03:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:15.537 00:35:15.537 real 0m49.863s 00:35:15.537 user 3m36.161s 00:35:15.537 sys 0m10.797s 00:35:15.537 14:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:15.538 14:03:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.538 ************************************ 00:35:15.538 END TEST nvmf_fio_host 00:35:15.538 ************************************ 00:35:15.538 14:03:07 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:15.538 14:03:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:15.538 14:03:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:15.538 14:03:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.538 ************************************ 00:35:15.538 START TEST nvmf_failover 00:35:15.538 ************************************ 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:15.538 * Looking for test storage... 00:35:15.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:35:15.538 14:03:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.106 14:03:14 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:22.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:22.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.106 14:03:14 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:22.106 Found net devices under 0000:af:00.0: cvl_0_0 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:22.106 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:22.107 Found net devices under 0000:af:00.1: cvl_0_1 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:22.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:35:22.107 00:35:22.107 --- 10.0.0.2 ping statistics --- 00:35:22.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.107 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:22.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:35:22.107 00:35:22.107 --- 10.0.0.1 ping statistics --- 00:35:22.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.107 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:35:22.107 14:03:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:22.107 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1608654 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1608654 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1608654 ']' 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
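The nvmf_tcp_init trace above wires the physical E810 pair into a self-contained topology: one port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove both directions before the target is launched inside the namespace. The same setup, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> initiator

This is also why nvmf_tgt is prefixed with 'ip netns exec cvl_0_0_ns_spdk' here: the target listens on 10.0.0.2 inside the namespace while every initiator-side tool runs outside it.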
00:35:22.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:22.366 14:03:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:22.366 [2024-06-11 14:03:15.098177] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:35:22.366 [2024-06-11 14:03:15.098238] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.366 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.366 [2024-06-11 14:03:15.196187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:22.626 [2024-06-11 14:03:15.281760] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.626 [2024-06-11 14:03:15.281799] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:22.626 [2024-06-11 14:03:15.281813] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.626 [2024-06-11 14:03:15.281824] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.626 [2024-06-11 14:03:15.281835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:22.626 [2024-06-11 14:03:15.281884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.626 [2024-06-11 14:03:15.281992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:22.626 [2024-06-11 14:03:15.281993] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.194 14:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:23.453 [2024-06-11 14:03:16.268110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.453 14:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:23.713 Malloc0 00:35:23.713 14:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:23.972 14:03:16 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:24.231 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:35:24.490 [2024-06-11 14:03:17.195595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.490 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:24.749 [2024-06-11 14:03:17.432332] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:24.749 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:25.009 [2024-06-11 14:03:17.669115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1609195 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1609195 /var/tmp/bdevperf.sock 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1609195 ']' 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:25.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
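failover.sh stages the same subsystem on three portals (4420, 4421, 4422) so the test can drop one path and watch I/O move to the next; bdevperf is started with -z so it idles on its own RPC socket until attach calls hand it paths. A sketch of that arrangement, using only commands that appear in this trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bdevperf.sock

    # One subsystem, three listeners - the redundancy the failover test exercises
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s $port
    done

    # Give the idle bdevperf its first path (the NVMe0n1 bdev appears once this succeeds)
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # ...and an alternate path on the next port under the same controller name
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1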
00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
00:35:25.009 14:03:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:35:25.945 14:03:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:35:25.945 14:03:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:35:25.945 14:03:18 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:26.204 NVMe0n1
00:35:26.204 14:03:19 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:26.463 00
00:35:26.722 14:03:19 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1609474
00:35:26.722 14:03:19 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:35:26.722 14:03:19 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:35:27.658 14:03:20 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:27.917 [2024-06-11 14:03:20.603233] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490990 is same with the state(5) to be set
[... the same *ERROR* line repeats back-to-back with timestamps through 2024-06-11 14:03:20.603547 ...]
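Two things in this stretch carry the rest of the log. First, the second bdev_nvme_attach_controller call reuses the controller name NVMe0 with a different port, so it does not create a second bdev: it registers 10.0.0.2:4421 as an alternate transport ID (a failover path) for the existing NVMe0n1; note that only the first call prints a bdev name. Condensed:

  # path 1: creates bdev NVMe0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # same -b name, new port: registers an alternate trid for failover, no new bdev
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Second, the nvmf_tcp_qpair_set_recv_state *ERROR* burst above is the target tearing down the 4420 queue pair as its listener is removed; in this test that teardown is the intended trigger, not a failure.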
00:35:27.918 14:03:20 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:35:31.204 14:03:23 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:31.204 00
00:35:31.204 14:03:24 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:35:31.463 [2024-06-11 14:03:24.291508] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2491f40 is same with the state(5) to be set
[... the same *ERROR* line repeats four more times, last at 2024-06-11 14:03:24.291600 ...]
00:35:31.463 14:03:24 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:35:34.797 14:03:27 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:34.797 [2024-06-11 14:03:27.528990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:34.797 14:03:27 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:35:35.733 14:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:35:35.991 [2024-06-11 14:03:28.775417] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492640 is same with the state(5) to be set
00:35:35.991 14:03:28 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1609474
00:35:42.565 0
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1609195
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1609195 ']'
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1609195
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1609195
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1609195'
killing process with pid 1609195
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1609195
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1609195
00:35:42.565 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
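Before wading into try.txt, dumped next, here is the full choreography the script just drove, condensed (full paths and the nqn elided; see the trace for the exact commands):

  rpc.py nvmf_subsystem_remove_listener ... -s 4420    # drop the active path; host should fail over to 4421
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... -s 4422 ...   # register a third path
  rpc.py nvmf_subsystem_remove_listener ... -s 4421    # drop the active path again; fail over to 4422
  sleep 3
  rpc.py nvmf_subsystem_add_listener ... -s 4420       # bring the original port back
  sleep 1
  rpc.py nvmf_subsystem_remove_listener ... -s 4422    # final drop; fail back to 4420
  wait $run_test_pid                                   # block until bdevperf's 15-second verify run finishes

The pass criterion is simply that the verify workload rides through all three path drops, which the clean shutdown above shows; try.txt is bdevperf's own view of those failovers.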
00:35:42.565 [2024-06-11 14:03:17.747083] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:35:42.565 [2024-06-11 14:03:17.747149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609195 ]
00:35:42.565 EAL: No free 2048 kB hugepages reported on node 1
00:35:42.565 [2024-06-11 14:03:17.852598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:42.565 [2024-06-11 14:03:17.935963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:42.565 Running I/O for 15 seconds...
00:35:42.565 [2024-06-11 14:03:20.603789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:42.565 [2024-06-11 14:03:20.603834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1, cid:2 and cid:3 ...]
00:35:42.565 [2024-06-11 14:03:20.603929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9bfa0 is same with the state(5) to be set
00:35:42.565 [2024-06-11 14:03:20.604691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:42.565 [2024-06-11 14:03:20.604714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching print_command / ABORTED - SQ DELETION print_completion pairs repeat for every I/O still queued on the dropped 4420 path: WRITEs lba:92888 through lba:93520 and READs lba:92504 through lba:92872, timestamps 14:03:20.604735 through 14:03:20.608270 ...]
00:35:42.569 [2024-06-11 14:03:20.608297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:42.569 [2024-06-11 14:03:20.608309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:42.569 [2024-06-11 14:03:20.608320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92880 len:8 PRP1 0x0 PRP2 0x0
00:35:42.569 [2024-06-11 14:03:20.608332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.569 [2024-06-11 14:03:20.608386] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dc22f0 was disconnected and freed. reset controller.
00:35:42.569 [2024-06-11 14:03:20.608402] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:35:42.569 [2024-06-11 14:03:20.608416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:42.569 [2024-06-11 14:03:20.612157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:42.569 [2024-06-11 14:03:20.612193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9bfa0 (9): Bad file descriptor
00:35:42.569 [2024-06-11 14:03:20.815310] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
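That block is the anatomy of one failover and worth reading once: the target closes the 4420 socket, the host qpair dies (the Bad file descriptor on flush), bdev_nvme completes everything still queued with ABORTED - SQ DELETION, picks the next registered trid (10.0.0.2:4421), and resets the controller against it, roughly 0.2 s from disconnect (14:03:20.608) to Resetting controller successful (14:03:20.815). When reproducing by hand, the currently active path can be spot-checked between steps; a hypothetical check, not part of the test script (the exact output fields vary a little across SPDK versions):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0   # the trid in the output shows the address/port in use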
00:35:42.569 [2024-06-11 14:03:24.292911 - 14:03:24.296509] nvme_qpair.c: *NOTICE*: [condensed: repeated READ (lba:125744-125920, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (lba:125936-126760, SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands sqid:1 nsid:1 len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 
00:35:42.572 [2024-06-11 14:03:24.296538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:35:42.572 [2024-06-11 14:03:24.296552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:35:42.572 [2024-06-11 14:03:24.296564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125928 len:8 PRP1 0x0 PRP2 0x0 
00:35:42.572 [2024-06-11 14:03:24.296576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:42.572 [2024-06-11 14:03:24.296629] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f66c60 was disconnected and freed. reset controller. 
00:35:42.572 [2024-06-11 14:03:24.296645] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:35:42.572 [2024-06-11 14:03:24.296673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:42.572 [2024-06-11 14:03:24.296687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:42.572 [2024-06-11 14:03:24.296701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:42.572 [2024-06-11 14:03:24.296714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:42.572 [2024-06-11 14:03:24.296728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:42.572 [2024-06-11 14:03:24.296740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:42.572 [2024-06-11 14:03:24.296753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:42.572 [2024-06-11 14:03:24.296766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:42.572 [2024-06-11 14:03:24.296779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:42.572 [2024-06-11 14:03:24.296808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9bfa0 (9): Bad file descriptor 
00:35:42.572 [2024-06-11 14:03:24.300527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:35:42.572 [2024-06-11 14:03:24.338636] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:42.572 [2024-06-11 14:03:28.778489 - 14:03:28.779944] nvme_qpair.c: *NOTICE*: [condensed: repeated WRITE commands sqid:1 nsid:1 lba:46504-46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 
00:35:42.573 [2024-06-11 14:03:28.779957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.573 [2024-06-11 14:03:28.779971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.573 [2024-06-11 14:03:28.779984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.779998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:42.574 [2024-06-11 14:03:28.780311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47016 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47024 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47032 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47040 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47048 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47056 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47064 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47072 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47080 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.780928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47088 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.780953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.780963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 
14:03:28.780976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47096 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.780989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.781001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.781011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.781022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47104 len:8 PRP1 0x0 PRP2 0x0 00:35:42.574 [2024-06-11 14:03:28.781034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.574 [2024-06-11 14:03:28.781047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.574 [2024-06-11 14:03:28.781057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.574 [2024-06-11 14:03:28.781067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47112 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47120 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47128 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47136 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47144 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47152 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47160 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47168 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47176 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47184 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:47192 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47200 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47208 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47216 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47224 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47232 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47240 len:8 PRP1 0x0 PRP2 0x0 
00:35:42.575 [2024-06-11 14:03:28.781827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47248 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47256 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.781953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47264 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.781966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.781979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.781989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.782000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47272 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.782013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.782026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.782036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.782047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47280 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.782061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.782074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.782084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.782095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47288 len:8 PRP1 0x0 PRP2 0x0 00:35:42.575 [2024-06-11 14:03:28.782108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.575 [2024-06-11 14:03:28.782120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.575 [2024-06-11 14:03:28.782131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.575 [2024-06-11 14:03:28.782141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47296 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47304 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47312 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47320 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47328 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47336 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47344 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47352 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47360 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47368 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47376 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47384 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:42.576 [2024-06-11 14:03:28.782689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46376 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46384 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46392 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46400 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46408 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46416 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.782959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.782972] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.782982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.782993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46424 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.783005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.783018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.783028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.783039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46432 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.783065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.783075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.783086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46440 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.783099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.783112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.793314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.793335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46448 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.793355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.793374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.793389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.793405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46456 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.793422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.793440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.576 [2024-06-11 14:03:28.793454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.576 [2024-06-11 14:03:28.793469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46464 len:8 PRP1 0x0 PRP2 0x0 00:35:42.576 [2024-06-11 14:03:28.793494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.576 [2024-06-11 14:03:28.793512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:35:42.577 [2024-06-11 14:03:28.793526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46472 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46480 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46488 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47392 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46496 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46504 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793915] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46512 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.793948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.793966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.793979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.793994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46520 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.794012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.794030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.794043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.794058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46528 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.794075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.794094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.794108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.794122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46536 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.794140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.794157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.794172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.794186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46544 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.794204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.794222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.794235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:42.577 [2024-06-11 14:03:28.794250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46552 len:8 PRP1 0x0 PRP2 0x0 00:35:42.577 [2024-06-11 14:03:28.794267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:42.577 [2024-06-11 14:03:28.794285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:42.577 [2024-06-11 14:03:28.794298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually:
00:35:42.577 [2024-06-11 14:03:28.794313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46560 len:8 PRP1 0x0 PRP2 0x0
00:35:42.577 [2024-06-11 14:03:28.794330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:42.577 [2024-06-11 14:03:28.794349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:42.577 [2024-06-11 14:03:28.794365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the identical print/abort/complete cycle repeats for every queued WRITE from lba:46568 through lba:47016 (len:8, step 8), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:35:42.580 [2024-06-11 14:03:28.798269] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f66ac0 was disconnected and freed. reset controller.
00:35:42.580 [2024-06-11 14:03:28.798291] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:35:42.580 [2024-06-11 14:03:28.798331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:42.580 [2024-06-11 14:03:28.798351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining ASYNC EVENT REQUESTs (cid:1, cid:2, cid:3) are aborted the same way ...]
00:35:42.580 [2024-06-11 14:03:28.798487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
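The abort flood above is the expected teardown path during a failover: once the qpair to 10.0.0.2:4422 is dropped, every WRITE still queued on it is completed manually with ABORTED - SQ DELETION, and bdev_nvme then retries the I/O on the alternate path. A minimal sketch of how this test builds such a multi-path controller, using only the attach/detach RPCs that appear verbatim later in this log (paths, socket, and NQN are the values this run uses; treat it as an illustration, not the full failover.sh):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    # Attaching the same bdev name to further target ports registers them as
    # alternate (failover) trids instead of creating new bdevs.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    # Removing the active path provokes exactly the failover logged above.
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN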
00:35:42.580 [2024-06-11 14:03:28.798525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9bfa0 (9): Bad file descriptor
00:35:42.580 [2024-06-11 14:03:28.803687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:42.580 [2024-06-11 14:03:28.972457] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:42.580
00:35:42.580 Latency(us)
00:35:42.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:42.580 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.580 Verification LBA range: start 0x0 length 0x4000
00:35:42.580 NVMe0n1 : 15.01 8472.61 33.10 943.76 0.00 13563.72 835.58 27682.41
00:35:42.580 ===================================================================================================================
00:35:42.580 Total : 8472.61 33.10 943.76 0.00 13563.72 835.58 27682.41
00:35:42.580 Received shutdown signal, test time was about 15.000000 seconds
00:35:42.580
00:35:42.580 Latency(us)
00:35:42.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:42.580 ===================================================================================================================
00:35:42.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1611881
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1611881 /var/tmp/bdevperf.sock
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1611881 ']'
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
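The count=3 check traced above is the pass criterion for the 15-second run: each successful path switch logs 'Resetting controller successful' exactly once, and three failovers are provoked. A sketch of that assertion, assuming the bdevperf output was captured to the try.txt file this test reads and removes later in the log:

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$out")
    # exactly one successful reset per provoked failover, or the test fails
    if (( count != 3 )); then
        exit 1
    fi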
00:35:42.580 14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable
14:03:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:35:43.146 14:03:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 ))
14:03:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
14:03:35 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:35:43.146 [2024-06-11 14:03:35.991672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:35:43.146 14:03:36 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:35:43.404 [2024-06-11 14:03:36.220387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:35:43.404 14:03:36 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:43.662 NVMe0n1
00:35:43.662 14:03:36 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:44.229
00:35:44.229 14:03:36 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:44.487
00:35:44.487 14:03:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:03:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:35:44.745 14:03:37 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:45.004 14:03:37 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:35:48.289 14:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:35:48.289 14:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:35:48.289 14:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1612945
00:35:48.289 14:03:40 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1612945
00:35:49.222 0
00:35:49.222 14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:49.222 [2024-06-11 14:03:34.871319] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:35:49.222 [2024-06-11 14:03:34.871387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611881 ]
00:35:49.222 EAL: No free 2048 kB hugepages reported on node 1
00:35:49.222 [2024-06-11 14:03:34.975141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:49.222 [2024-06-11 14:03:35.051132] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:35:49.222 [2024-06-11 14:03:37.671496] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:35:49.222 [2024-06-11 14:03:37.671553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:49.222 [2024-06-11 14:03:37.671571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.222 [2024-06-11 14:03:37.671586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:49.222 [2024-06-11 14:03:37.671599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.222 [2024-06-11 14:03:37.671612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:49.222 [2024-06-11 14:03:37.671625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.222 [2024-06-11 14:03:37.671639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:49.222 [2024-06-11 14:03:37.671652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:49.223 [2024-06-11 14:03:37.671665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:49.223 [2024-06-11 14:03:37.671699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:49.223 [2024-06-11 14:03:37.671721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc0fa0 (9): Bad file descriptor
00:35:49.223 [2024-06-11 14:03:37.722034] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:35:49.223 Running I/O for 1 seconds...
00:35:49.223
00:35:49.223 Latency(us)
00:35:49.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:49.223 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:49.223 Verification LBA range: start 0x0 length 0x4000
00:35:49.223 NVMe0n1 : 1.01 8425.64 32.91 0.00 0.00 15123.03 2608.33 12268.34
00:35:49.223 ===================================================================================================================
00:35:49.223 Total : 8425.64 32.91 0.00 0.00 15123.03 2608.33 12268.34
00:35:49.223 14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:49.737 14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:35:49.995 14:03:42 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:50.253 14:03:43 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:35:53.536 14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:35:53.536 14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1611881
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1611881 ']'
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1611881
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1611881
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1611881'
killing process with pid 1611881
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1611881
14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1611881
00:35:53.794 14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:35:54.053
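Steps @95 through @103 above are the verification pattern for the one-second run: after each bdev_nvme_detach_controller the script confirms with bdev_nvme_get_controllers that NVMe0 is still backed by a live controller, that is, that a path remained after the removal. A condensed sketch of that loop; the check_ctrlr helper is illustrative and not a function in failover.sh:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    check_ctrlr() {
        # the controller list must still contain NVMe0 after a path is removed
        $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0
    }
    for port in 4422 4421; do
        check_ctrlr
        $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    sleep 3
    check_ctrlr    # the last remaining path (4420) must still be up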
14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:54.053 rmmod nvme_tcp 00:35:54.053 rmmod nvme_fabrics 00:35:54.053 rmmod nvme_keyring 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1608654 ']' 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1608654 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1608654 ']' 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1608654 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1608654 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1608654' 00:35:54.053 killing process with pid 1608654 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1608654 00:35:54.053 14:03:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1608654 00:35:54.311 14:03:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:54.311 14:03:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:54.311 14:03:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:54.311 14:03:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:54.311 14:03:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:54.312 14:03:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.312 14:03:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:54.312 14:03:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.844 14:03:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:56.844 00:35:56.844 real 0m41.180s 00:35:56.844 user 2m9.078s 00:35:56.844 sys 0m10.197s 00:35:56.844 14:03:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:56.844 14:03:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
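The bare rmmod lines above are modprobe's verbose output: nvmftestfini removes nvme-tcp with modprobe -v -r, which prints each rmmod it issues (nvme_tcp, nvme_fabrics, nvme_keyring) before nvme-fabrics is removed separately. A sketch of that tolerant unload matching the set +e / {1..20} retry structure visible in the trace; the sleep between attempts is an assumption, not shown in this log:

    set +e                        # unloading can fail while connections drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                   # assumed back-off; the harness may retry immediately
    done
    modprobe -v -r nvme-fabrics   # remove the fabrics core once the transport is gone
    set -e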
00:35:56.844 ************************************ 00:35:56.844 END TEST nvmf_failover 00:35:56.844 ************************************ 00:35:56.844 14:03:49 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:56.844 14:03:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:56.844 14:03:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:56.844 14:03:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:56.844 ************************************ 00:35:56.844 START TEST nvmf_host_discovery 00:35:56.844 ************************************ 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:35:56.844 * Looking for test storage... 00:35:56.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:35:56.844 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.845 14:03:49 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:35:56.845 14:03:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:03.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:03.467 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:03.467 Found net devices under 0000:af:00.0: cvl_0_0 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:03.467 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:03.468 Found net devices under 0000:af:00.1: cvl_0_1 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:03.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:36:03.468 00:36:03.468 --- 10.0.0.2 ping statistics --- 00:36:03.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.468 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:03.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:36:03.468 00:36:03.468 --- 10.0.0.1 ping statistics --- 00:36:03.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.468 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:03.468 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1617477 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1617477 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1617477 ']' 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:03.729 14:03:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:03.729 [2024-06-11 14:03:56.428515] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:36:03.729 [2024-06-11 14:03:56.428575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.729 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.729 [2024-06-11 14:03:56.525566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.729 [2024-06-11 14:03:56.611180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.729 [2024-06-11 14:03:56.611219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.729 [2024-06-11 14:03:56.611232] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.729 [2024-06-11 14:03:56.611244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.729 [2024-06-11 14:03:56.611254] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
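[Annotation] For readers following the trace: the netns plumbing and target launch above reduce to the shell sequence below. This is a minimal sketch, not a verbatim extract of nvmf/common.sh; it assumes the cvl_0_0/cvl_0_1 interface names, addresses, and build paths seen in this run.

    # Move one port of the NIC into a private namespace so target and host
    # can talk over real hardware on the same machine (names from this run).
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Launch the NVMe-oF target inside the namespace with the flags traced
    # above (shm id 0, tracepoint mask 0xFFFF, core mask 0x2), then block
    # until its default RPC socket appears.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done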
00:36:03.729 [2024-06-11 14:03:56.611281] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.667 [2024-06-11 14:03:57.378427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.667 [2024-06-11 14:03:57.386607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.667 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.668 null0 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.668 null1 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1617733 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1617733 /tmp/host.sock 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1617733 ']' 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:04.668 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:04.668 14:03:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:04.668 [2024-06-11 14:03:57.466434] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:36:04.668 [2024-06-11 14:03:57.466503] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1617733 ] 00:36:04.668 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.668 [2024-06-11 14:03:57.567539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.927 [2024-06-11 14:03:57.649720] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 
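[Annotation] The host side of the test runs a second SPDK app on its own RPC socket, points it at the discovery service with bdev_nvme_start_discovery, then polls two helpers until the expected controller and bdevs show up. A rough equivalent using the stock scripts/rpc.py in place of the test's rpc_cmd wrapper (sockets, NQNs, and flags as in this run):

    # Second app: one core, private RPC socket, acting as the NVMe-oF host
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

    ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # The helpers traced above: sorted, space-joined name lists that the
    # waitforcondition loops compare against strings like "nvme0n1 nvme0n2"
    get_subsystem_names() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }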
00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.496 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.755 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:36:05.755 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:36:05.756 14:03:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:05.756 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.015 [2024-06-11 14:03:58.686105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.015 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:36:06.016 14:03:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:36:06.584 [2024-06-11 14:03:59.428708] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:06.585 [2024-06-11 14:03:59.428737] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:06.585 [2024-06-11 14:03:59.428760] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:06.844 [2024-06-11 14:03:59.515022] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:06.844 [2024-06-11 14:03:59.740491] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:36:06.844 [2024-06-11 14:03:59.740517] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.102 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:07.103 14:03:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.362 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.363 14:04:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 [2024-06-11 14:04:00.230736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:07.363 [2024-06-11 14:04:00.231482] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:07.363 [2024-06-11 14:04:00.231513] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:07.363 14:04:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:07.623 [2024-06-11 14:04:00.360296] 
bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:36:07.623 14:04:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:36:07.623 [2024-06-11 14:04:00.461017] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:07.623 [2024-06-11 14:04:00.461040] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:07.623 [2024-06-11 14:04:00.461050] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.561 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.821 [2024-06-11 14:04:01.507341] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:08.821 [2024-06-11 14:04:01.507367] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.821 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:08.821 [2024-06-11 14:04:01.511764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:08.821 [2024-06-11 14:04:01.511789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:08.821 [2024-06-11 14:04:01.511805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:08.821 [2024-06-11 14:04:01.511818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:08.821 [2024-06-11 14:04:01.511833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:08.822 id:0 cdw10:00000000 cdw11:00000000 00:36:08.822 [2024-06-11 14:04:01.511849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:08.822 [2024-06-11 14:04:01.511863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:08.822 [2024-06-11 14:04:01.511876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:08.822 [2024-06-11 14:04:01.511889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.822 14:04:01 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:08.822 [2024-06-11 14:04:01.521777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.531822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.532112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.532134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.532149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.532168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.532196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.532209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.532223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.532240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
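[Annotation] Each resetting/connect-failed/reset-failed group above is one reconnect attempt: the test has just removed the 4420 listener, so every connect() to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED), the controller reset fails, and bdev_nvme retries. The churn ends once the discovery poller processes the AER-driven log page and drops the stale 4420 path, leaving only 4421. One way to watch that convergence from outside, mirroring the get_subsystem_paths helper in the trace (a sketch, reusing this run's socket and controller name):

    # Poll the host app until only the surviving listener port remains
    until [ "$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" = "4421" ]; do
        sleep 1
    done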
00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.822 [2024-06-11 14:04:01.541888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.542188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.542208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.542221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.542239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.542256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.542268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.542281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.542298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:08.822 [2024-06-11 14:04:01.551950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.552292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.552313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.552327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.552345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.552371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.552385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.552398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.552414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:08.822 [2024-06-11 14:04:01.562014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.562348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.562369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.562383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.562401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.562418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.562430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.562443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.562459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:08.822 [2024-06-11 14:04:01.572081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.572347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.572367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.572380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.572397] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.572423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.572436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.572449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.572465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:08.822 [2024-06-11 14:04:01.582144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.582413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.582438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.582452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.582470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.582505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.582517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.582531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.582546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:08.822 [2024-06-11 14:04:01.592210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:08.822 [2024-06-11 14:04:01.592441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.822 [2024-06-11 14:04:01.592460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c97800 with addr=10.0.0.2, port=4420 00:36:08.822 [2024-06-11 14:04:01.592474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97800 is same with the state(5) to be set 00:36:08.822 [2024-06-11 14:04:01.592499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c97800 (9): Bad file descriptor 00:36:08.822 [2024-06-11 14:04:01.592526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:08.822 [2024-06-11 14:04:01.592539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:08.822 [2024-06-11 14:04:01.592552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:08.822 [2024-06-11 14:04:01.592576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
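The probes those conditions evaluate appear at host/discovery.sh@55 and @59: an RPC against the host app's socket, with the names extracted by jq, sorted, and flattened by xargs. A plausible reconstruction, assuming rpc_cmd is a wrapper that forwards its arguments to scripts/rpc.py:

# reconstruction of the probes at host/discovery.sh@55/@59; rpc_cmd is
# assumed to forward straight to scripts/rpc.py
get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
		| jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
		| jq -r '.[].name' | sort | xargs
}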
00:36:08.822 [2024-06-11 14:04:01.595559] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:36:08.822 [2024-06-11 14:04:01.595581] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:08.822 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:36:08.823 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:36:09.082 
14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:09.082 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.083 14:04:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.043 [2024-06-11 14:04:02.896417] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:10.043 [2024-06-11 14:04:02.896438] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:10.043 [2024-06-11 14:04:02.896455] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:10.302 [2024-06-11 14:04:02.984748] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:36:10.302 [2024-06-11 14:04:03.050725] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:10.302 [2024-06-11 14:04:03.050765] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.302 request: 00:36:10.302 { 00:36:10.302 "name": "nvme", 00:36:10.302 "trtype": "tcp", 00:36:10.302 "traddr": "10.0.0.2", 00:36:10.302 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:36:10.302 "adrfam": "ipv4", 00:36:10.302 "trsvcid": "8009", 00:36:10.302 "wait_for_attach": true, 00:36:10.302 "method": "bdev_nvme_start_discovery", 00:36:10.302 "req_id": 1 00:36:10.302 } 00:36:10.302 Got JSON-RPC error response 00:36:10.302 response: 00:36:10.302 { 00:36:10.302 "code": -17, 00:36:10.302 "message": "File exists" 00:36:10.302 } 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- 
# type -t rpc_cmd 00:36:10.302 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.303 request: 00:36:10.303 { 00:36:10.303 "name": "nvme_second", 00:36:10.303 "trtype": "tcp", 00:36:10.303 "traddr": "10.0.0.2", 00:36:10.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:10.303 "adrfam": "ipv4", 00:36:10.303 "trsvcid": "8009", 00:36:10.303 "wait_for_attach": true, 00:36:10.303 "method": "bdev_nvme_start_discovery", 00:36:10.303 "req_id": 1 00:36:10.303 } 00:36:10.303 Got JSON-RPC error response 00:36:10.303 response: 00:36:10.303 { 00:36:10.303 "code": -17, 00:36:10.303 "message": "File exists" 00:36:10.303 } 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:10.303 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.562 14:04:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:11.498 [2024-06-11 14:04:04.319602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.498 [2024-06-11 14:04:04.319638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd3c30 with addr=10.0.0.2, port=8010 00:36:11.498 [2024-06-11 14:04:04.319657] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:11.498 [2024-06-11 14:04:04.319669] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:11.498 [2024-06-11 14:04:04.319681] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:12.434 [2024-06-11 14:04:05.322039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.434 [2024-06-11 14:04:05.322070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c8f780 with addr=10.0.0.2, port=8010 00:36:12.434 [2024-06-11 14:04:05.322088] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:12.434 [2024-06-11 14:04:05.322099] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:12.434 [2024-06-11 14:04:05.322111] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:13.811 [2024-06-11 14:04:06.324105] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:36:13.811 request: 00:36:13.811 { 00:36:13.811 "name": "nvme_second", 00:36:13.811 "trtype": "tcp", 00:36:13.811 "traddr": "10.0.0.2", 00:36:13.811 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:13.811 "adrfam": "ipv4", 00:36:13.811 "trsvcid": "8010", 00:36:13.811 "attach_timeout_ms": 3000, 00:36:13.811 "method": "bdev_nvme_start_discovery", 00:36:13.811 "req_id": 1 00:36:13.811 } 00:36:13.811 Got JSON-RPC error response 00:36:13.811 response: 00:36:13.811 { 00:36:13.811 "code": -110, 00:36:13.811 "message": "Connection timed out" 00:36:13.811 } 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:36:13.811 
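Both negative cases above can be reproduced directly against the host socket: a second bdev_nvme_start_discovery on a port that already has a discovery service returns JSON-RPC error -17 ("File exists"), while pointing one at 10.0.0.2:8010, where nothing listens, fails each connect() with errno 111 until the -T 3000 attach timeout lapses and the RPC returns -110 ("Connection timed out"). A sketch using the same arguments as the trace, assuming scripts/rpc.py is invoked from the SPDK repo root:

# duplicate discovery start on 8009 -> JSON-RPC error -17 "File exists"
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
	-b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
	-q nqn.2021-12.io.spdk:test -w || echo "expected: File exists"

# no listener on 8010 -> connect() retries (errno 111) until the 3000 ms
# attach timeout, then JSON-RPC error -110 "Connection timed out"
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
	-b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
	-q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: timeout"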
14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1617733 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:13.811 rmmod nvme_tcp 00:36:13.811 rmmod nvme_fabrics 00:36:13.811 rmmod nvme_keyring 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1617477 ']' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1617477 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 1617477 ']' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 1617477 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1617477 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1617477' 00:36:13.811 killing process with pid 
1617477 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 1617477 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 1617477 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:13.811 14:04:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:16.347 00:36:16.347 real 0m19.505s 00:36:16.347 user 0m22.661s 00:36:16.347 sys 0m7.420s 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:16.347 ************************************ 00:36:16.347 END TEST nvmf_host_discovery 00:36:16.347 ************************************ 00:36:16.347 14:04:08 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:16.347 14:04:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:16.347 14:04:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:16.347 14:04:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.347 ************************************ 00:36:16.347 START TEST nvmf_host_multipath_status 00:36:16.347 ************************************ 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:16.347 * Looking for test storage... 
00:36:16.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.347 14:04:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.347 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:16.348 14:04:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:36:16.348 14:04:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:22.917 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:22.917 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
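The device scan running here (nvmf/common.sh@382-@400) resolves each matched e810 PCI function to its kernel net device by globbing sysfs, which is where the "Found net devices under ..." lines come from. Roughly, under the assumption that pci_devs has already been filled from the 0x159b matches above:

# sketch of the sysfs glob at nvmf/common.sh@383/@399-@400
for pci in "${pci_devs[@]}"; do
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
	pci_net_devs=("${pci_net_devs[@]##*/}")   # keep interface names only
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
	net_devs+=("${pci_net_devs[@]}")
done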
00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:22.917 Found net devices under 0000:af:00.0: cvl_0_0 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:22.917 Found net devices under 0000:af:00.1: cvl_0_1 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:22.917 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:22.918 14:04:15 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:22.918 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:23.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:23.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:36:23.177 00:36:23.177 --- 10.0.0.2 ping statistics --- 00:36:23.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.177 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:23.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:23.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:36:23.177 00:36:23.177 --- 10.0.0.1 ping statistics --- 00:36:23.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.177 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1622998 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1622998 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1622998 ']' 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:36:23.177 14:04:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:23.177 [2024-06-11 14:04:16.009560] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
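The target/initiator split exercised by those two pings is built entirely from the commands traced at nvmf/common.sh@242-@268: the cvl_0_0 interface is moved into a fresh network namespace to act as the target while cvl_0_1 stays in the root namespace as the initiator. Collected in one place, with the interface names as they appear in the trace:

# the namespace wiring behind the two pings above (nvmf/common.sh@248-@268)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator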
00:36:23.177 [2024-06-11 14:04:16.009621] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.177 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.436 [2024-06-11 14:04:16.112656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:23.436 [2024-06-11 14:04:16.198863] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.436 [2024-06-11 14:04:16.198905] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.436 [2024-06-11 14:04:16.198918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.436 [2024-06-11 14:04:16.198931] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.436 [2024-06-11 14:04:16.198941] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.436 [2024-06-11 14:04:16.198993] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.436 [2024-06-11 14:04:16.198998] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1622998 00:36:24.372 14:04:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:24.372 [2024-06-11 14:04:17.167488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.372 14:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:24.664 Malloc0 00:36:24.664 14:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:36:24.922 14:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:24.922 14:04:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:25.180 [2024-06-11 14:04:18.030324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.180 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:25.438 [2024-06-11 14:04:18.246930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1623476 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1623476 /var/tmp/bdevperf.sock 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1623476 ']' 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:25.438 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:25.697 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:25.697 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:36:25.697 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:25.956 14:04:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:36:26.214 Nvme0n1 00:36:26.472 14:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:26.731 Nvme0n1 00:36:26.731 14:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:36:26.731 14:04:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:36:28.631 14:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:36:28.631 14:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:28.889 14:04:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:29.148 14:04:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.523 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:30.782 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:31.040 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:31.040 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:31.040 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:31.040 14:04:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:36:31.298 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:31.298 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:31.298 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:31.299 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:31.557 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:31.557 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:36:31.557 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:31.815 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:32.074 14:04:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:36:33.010 14:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:36:33.010 14:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:33.010 14:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:33.010 14:04:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:33.269 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:33.269 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:33.269 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:33.269 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:33.528 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:33.528 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:33.528 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:33.528 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:33.787 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:36:33.787 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:33.787 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:33.787 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:34.047 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:34.047 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:34.047 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:34.047 14:04:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:34.306 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:34.306 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:34.306 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:34.306 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:34.565 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:34.565 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:36:34.565 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:34.825 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:34.825 14:04:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:36.202 14:04:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:36.462 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:36.462 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:36.462 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:36.462 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:36.721 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:36.980 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:36.980 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:36.980 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:36.980 14:04:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:37.239 14:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:37.239 14:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:36:37.239 14:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:37.499 14:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:37.758 14:04:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:36:38.695 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:36:38.695 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:38.695 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:38.695 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:38.954 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:38.954 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:38.954 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:38.954 14:04:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:39.213 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:39.213 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:39.213 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.213 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:39.471 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:39.471 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:39.471 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.471 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:39.730 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:39.730 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:39.730 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.730 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:39.990 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
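The check_status rounds above all probe the same two paths. After the target side was provisioned earlier (tcp transport, a Malloc0 namespace on cnode1, listeners on ports 4420 and 4421), the host attached both paths at multipath_status.sh@55/@56: one bdev_nvme_attach_controller per listener port, the second with -x multipath so both land on the single Nvme0n1 bdev. Each individual probe is the port_status pattern traced at @64; the helper below is a reconstruction of that trace, not a verbatim copy of multipath_status.sh (rpc.py stands for scripts/rpc.py in the SPDK tree):

# Attach the same subsystem over both listener ports (as at @55/@56).
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -x multipath -l -1 -o 10                  # second path onto the same bdev

# One probe = one bdev_nvme_get_io_paths RPC against bdevperf's socket,
# filtered by jq for one port and one attribute (current / connected /
# accessible), then compared against the expected value.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}
port_status 4420 current true                 # e.g. the first probe of a check_status round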
00:36:39.990 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:39.990 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:39.990 14:04:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:40.249 14:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:40.249 14:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:36:40.249 14:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:40.509 14:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:40.768 14:04:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:36:41.705 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:36:41.705 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:41.705 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:41.705 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:41.964 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:41.964 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:41.964 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:41.964 14:04:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:42.267 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:42.267 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:42.267 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.267 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:42.534 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:42.534 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
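The @108 transition just above is representative of how every ANA change in this test is driven from the target side: one nvmf_subsystem_listener_set_ana_state RPC per listener, then a one-second sleep so the initiator can consume the ANA log page update before check_status runs. Verbatim from the @59/@60 trace, with paths shortened:

# set_ANA_state inaccessible inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1   # let the host pick up the ANA change before probing io_paths

Note what check_status false false true true false false (just above) asserts: with both listeners inaccessible, neither path is current or accessible, yet both remain connected; the TCP connections stay up and only I/O placement is affected.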
00:36:42.534 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.534 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:42.793 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:42.793 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:42.793 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:42.793 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:36:43.051 14:04:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:43.309 14:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:43.568 14:04:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:36:44.504 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:36:44.504 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:44.504 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:44.504 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:44.762 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:44.762 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:44.762 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:44.762 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:45.021 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.021 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:45.021 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.021 14:04:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:45.279 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.279 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:45.279 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.279 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:45.537 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:45.537 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:45.537 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.537 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:45.796 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:45.796 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:45.796 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:45.796 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:46.055 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:46.055 14:04:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:36:46.313 14:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:36:46.313 14:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:36:46.572 14:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:46.830 14:04:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:36:47.766 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:36:47.766 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:47.766 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.766 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:48.026 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.026 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:48.026 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.026 14:04:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:48.284 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.284 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:48.284 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.284 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:48.543 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.543 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:48.543 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:48.543 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.802 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.802 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:48.802 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.802 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:49.061 14:04:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:49.061 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:49.061 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:49.061 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:49.319 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:49.319 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:36:49.319 14:04:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:49.319 14:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:49.577 14:04:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:50.954 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:51.212 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.212 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:51.212 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.212 14:04:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:51.471 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.471 14:04:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:51.471 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.471 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:51.471 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.471 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:51.471 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.729 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:51.729 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.729 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:51.729 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.729 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:51.988 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.988 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:36:51.988 14:04:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:52.246 14:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:52.505 14:04:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:36:53.440 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:36:53.440 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:53.441 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:53.441 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:53.699 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:53.699 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:53.699 14:04:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:53.699 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:53.957 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:53.957 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:53.957 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:53.957 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.216 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.216 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:54.216 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.216 14:04:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:54.476 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.476 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:54.476 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.476 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:54.734 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.735 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:54.735 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.735 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:54.993 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.993 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:36:54.993 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:55.251 14:04:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:55.510 14:04:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:36:56.445 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:36:56.445 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:56.445 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.445 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:56.704 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:56.704 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:56.704 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.704 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:56.962 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:56.962 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:56.962 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.962 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:57.221 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.221 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:57.221 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.221 14:04:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.480 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1623476 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1623476 ']' 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1623476 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:57.739 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1623476 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1623476' 00:36:58.002 killing process with pid 1623476 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1623476 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1623476 00:36:58.002 Connection closed with partial response: 00:36:58.002 00:36:58.002 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1623476 00:36:58.002 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:58.002 [2024-06-11 14:04:18.299402] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:36:58.002 [2024-06-11 14:04:18.299471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623476 ] 00:36:58.002 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.002 [2024-06-11 14:04:18.377824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.002 [2024-06-11 14:04:18.448200] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:58.002 Running I/O for 90 seconds... 
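killprocess above is the stock autotest teardown, and the @949-@973 xtrace shows its shape: validate the pid, check the process name (reactor_2 in this run) so a sudo wrapper is never killed, kill, then wait to reap. Because bdevperf is killed mid-run, the "Connection closed with partial response" lines that follow are expected rather than a failure. A sketch reconstructed from that trace (the exact body lives in autotest_common.sh):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                      # @949: need a pid
    kill -0 "$pid" || return 0                     # @953: bail if not running
    if [[ $(uname) == Linux ]]; then               # @954
        local name
        name=$(ps --no-headers -o comm= "$pid")    # @955: reactor_2 here
        [[ $name != sudo ]] || return 1            # @959: never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"           # @967
    kill "$pid"                                    # @968
    wait "$pid" || true                            # @973: reap; non-zero exit is fine here
}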
00:36:58.002 [2024-06-11 14:04:33.276009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.002 [2024-06-11 14:04:33.276050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:58.002 [2024-06-11 14:04:33.276419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.002 [2024-06-11 14:04:33.276430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.276446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.276455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.276471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.276486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:58.003 [2024-06-11 14:04:33.277830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.277979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.277990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:58.003 [2024-06-11 14:04:33.278494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.003 [2024-06-11 14:04:33.278503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:58.004 
[2024-06-11 14:04:33.278651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.278976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.278986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.004 [2024-06-11 14:04:33.279626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:58.004 [2024-06-11 14:04:33.279645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.279978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.279998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.005 [2024-06-11 14:04:33.280007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.005 [2024-06-11 14:04:33.280036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:33.280756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:33.280766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
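Every completion in the burst above carries the same NVMe status, ASYMMETRIC ACCESS INACCESSIBLE (03/02): Status Code Type 0x3 (Path Related Status) with Status Code 0x2 (ANA Inaccessible). The target is telling the initiator that namespace 1 is currently unreachable through this path while the multipath status test flips ANA states, so each queued READ/WRITE is completed with an error and left for the host to retry on another path. Bursts like this are hard to eyeball; a throwaway awk sketch along the following lines can tally them (an editor's illustration, not part of the test suite; it assumes one record per console line, and console.log is a hypothetical capture of this output):

    awk '
      # Command prints look like "*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:..."
      /nvme_io_qpair_print_command/ { op[($0 ~ / READ /) ? "READ" : "WRITE"]++ }
      # Completion prints carry the status text followed by "(SCT/SC)",
      # e.g. "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ..."
      /spdk_nvme_print_completion/ {
        if (match($0, /\*NOTICE\*: .* \([0-9a-f][0-9a-f]\/[0-9a-f][0-9a-f]\)/))
          status[substr($0, RSTART + 10, RLENGTH - 10)]++   # drop "*NOTICE*: "
      }
      END {
        for (k in op)     printf "%-5s commands: %d\n", k, op[k]
        for (k in status) printf "completions %s: %d\n", k, status[k]
      }
    ' console.log

Run against this stretch of the log it would report one completion per command, all with the ANA-inaccessible status string, which is the expected signature while a path is held INACCESSIBLE.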
00:36:58.005 [2024-06-11 14:04:48.174380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.005 [2024-06-11 14:04:48.174423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:48.174459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.005 [2024-06-11 14:04:48.174469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:48.174490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.005 [2024-06-11 14:04:48.174500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:58.005 [2024-06-11 14:04:48.174514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.174523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.174548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.174977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.174991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.175296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.175305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.176840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.176866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.176891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.176914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.176938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.176962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.176976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.176986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:58.006 [2024-06-11 14:04:48.177012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:58.006 [2024-06-11 14:04:48.177035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.177060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.177083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.177107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.177130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:36:58.006 [2024-06-11 14:04:48.177145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.006 [2024-06-11 14:04:48.177154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:58.007 [2024-06-11 14:04:48.177168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.007 [2024-06-11 14:04:48.177177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:58.007 [2024-06-11 14:04:48.177192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.007 [2024-06-11 14:04:48.177200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:58.007 [2024-06-11 14:04:48.177215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.007 [2024-06-11 14:04:48.177226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:58.007 [2024-06-11 14:04:48.177379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 
nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.007 [2024-06-11 14:04:48.177392-14:04:48.177926] nvme_qpair.c: 243/474: repeated *NOTICE* pairs on qid:1: some twenty further READ commands (lba 69056-69512, len:8, SGL TRANSPORT DATA BLOCK) and two WRITE commands (lba 69872 and 69888, len:8, SGL DATA BLOCK OFFSET), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0078-000e, p:0 m:0 dnr:0
00:36:58.007 Received shutdown signal, test time was about 30.985147 seconds
00:36:58.007
00:36:58.007                                                   Latency(us)
00:36:58.007 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min        max
00:36:58.007 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:36:58.007 Verification LBA range: start 0x0 length 0x4000
00:36:58.007 Nvme0n1                :      30.98  8427.43    32.92     0.00    0.00   15169.98    231.01 4026531.84
00:36:58.007 ===================================================================================================================
00:36:58.007 Total                  :             8427.43    32.92     0.00    0.00   15169.98    231.01 4026531.84
00:36:58.007 14:04:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:58.267 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:36:58.267 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:36:58.267 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:58.268 rmmod nvme_tcp
00:36:58.268 rmmod nvme_fabrics
00:36:58.268 rmmod nvme_keyring
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1622998 ']'
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1622998
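The killprocess helper expanded in the records that follow behaves roughly like the sketch below, reconstructed from the xtrace output itself. The real definition lives in test/common/autotest_common.sh and handles more cases; in particular the sudo branch is not exercised in this log, so its body here is an assumption.

    # Sketch of killprocess, reconstructed from the @949-@973 trace records below.
    # Assumption: only the paths visible in this log are modeled.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @949: reject a missing pid
        kill -0 "$pid" 2>/dev/null || return 0    # @953: already gone, nothing to do
        local process_name=
        if [ "$(uname)" = Linux ]; then           # @954
            process_name=$(ps --no-headers -o comm= "$pid")  # @955: e.g. reactor_0
        fi
        if [ "$process_name" = sudo ]; then       # @959: assumed branch, not hit here
            kill -9 "$pid"
        else
            echo "killing process with pid $pid"  # @967
            kill "$pid"                           # @968: SIGTERM the SPDK reactor
            wait "$pid" || true                   # @973: reap it before returning
        fi
    }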
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1622998 ']'
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1622998
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1622998
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1622998'
00:36:58.268 killing process with pid 1622998
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1622998
00:36:58.268 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1622998
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:58.527 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:58.528 14:04:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:01.102 14:04:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:37:01.102
00:37:01.102 real	0m44.577s
00:37:01.102 user	1m57.437s
00:37:01.102 sys	0m16.183s
00:37:01.102 14:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable
00:37:01.102 14:04:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:37:01.102 ************************************
00:37:01.102 END TEST nvmf_host_multipath_status
00:37:01.102 ************************************
00:37:01.102 14:04:53 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:37:01.102 14:04:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:37:01.102 14:04:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:37:01.102 14:04:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:01.102 ************************************
00:37:01.102 START TEST nvmf_discovery_remove_ifc
00:37:01.102 ************************************
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:37:01.102 * Looking for test storage...
00:37:01.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:01.102 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:01.103 [paths/export.sh@2-4: three PATH assignments that repeatedly re-prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the stock /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin value]
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [the PATH value above]
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:37:01.103 14:04:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:07.672 [nvmf/common.sh@291-298: declare the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays]
00:37:07.672 [nvmf/common.sh@301-318: populate e810 (0x1592, 0x159b), x722 (0x37d2) and mlx (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) from pci_bus_cache]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:07.672 Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:07.672 [nvmf/common.sh@342-352: driver ice is neither unknown nor unbound, 0x159b is not a Mellanox part, transport is tcp, so the port is kept]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:07.672 Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:07.672 [nvmf/common.sh@342-352: the same checks pass for the second port]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
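The @382-@401 records that follow map each kept PCI address to its kernel net interface through sysfs. Condensed into standalone form it is roughly the sketch below, assuming pci_devs and net_devs are set up as above; the operstate/up check from the trace (@390) is elided and the cvl_0_* names come from the lab's udev renaming:

    # Sketch of the PCI-to-netdev mapping traced in nvmf/common.sh@382-@401.
    shopt -s nullglob
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists bound netdevs
        (( ${#pci_net_devs[@]} == 0 )) && continue          # port without a netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done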
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:37:07.672 [nvmf/common.sh@389-399: the interface is up, so its sysfs path is stripped to the bare name]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:07.672 Found net devices under 0000:af:00.0: cvl_0_0
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:37:07.672 [nvmf/common.sh@382-399: the same lookup for the second port]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:07.672 Found net devices under 0000:af:00.1: cvl_0_1
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:37:07.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:07.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms
00:37:07.672
00:37:07.672 --- 10.0.0.2 ping statistics ---
00:37:07.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:07.672 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:07.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:07.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms
00:37:07.672
00:37:07.672 --- 10.0.0.1 ping statistics ---
00:37:07.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:07.672 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
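Both pings succeeding means the point-to-point topology the rest of the test relies on is in place. Replaying the @244-@268 records as a standalone script gives the whole setup at a glance (commands verbatim from the trace; must run as root):

    # Target port cvl_0_0 lives inside netns cvl_0_0_ns_spdk as 10.0.0.2;
    # initiator port cvl_0_1 stays in the default netns as 10.0.0.1.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Pushing the target behind a namespace is what lets the test later delete the target's address and down its link without disturbing the rest of the machine.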
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable
00:37:07.672 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1632679
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1632679
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1632679 ']'
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:07.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable
00:37:07.930 14:05:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:07.930 [2024-06-11 14:05:00.634990] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:37:07.930 [2024-06-11 14:05:00.635049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:07.930 EAL: No free 2048 kB hugepages reported on node 1
00:37:07.930 [2024-06-11 14:05:00.732118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:07.930 [app.c@604-612, 14:05:00.814: Tracepoint Group Mask 0xFFFF specified; use 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace', or copy /dev/shm/nvmf_trace.0) to capture a snapshot of events at runtime]
00:37:07.930 [2024-06-11 14:05:00.814514] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:37:08.867 [common/autotest_common.sh@560/@10: rpc_cmd applies the batched target configuration]
00:37:08.867 [2024-06-11 14:05:01.590628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:08.867 [2024-06-11 14:05:01.598831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:37:08.867 null0
00:37:08.867 [2024-06-11 14:05:01.630807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1632842
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
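Two SPDK processes are now involved: the target, started inside the namespace at @480, and the host-side application started at @58 with bdev_nvme debug logging. A condensed sketch of the two launches, with SPDK_BIN as shorthand for the build path and backgrounding standing in for what the harness does internally:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

    # Target: inside the netns, on core 1 (-m 0x2), all tracepoint groups on.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Host side: core 0, a private RPC socket, paused by --wait-for-rpc so
    # bdev_nvme options can be set before the framework initializes.
    "$SPDK_BIN/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!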
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1632842 /tmp/host.sock
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1632842 ']'
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:37:08.867 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable
00:37:08.867 14:05:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:09.126 [2024-06-11 14:05:01.703816] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:37:09.126 [2024-06-11 14:05:01.703876] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632842 ]
00:37:09.126 EAL: No free 2048 kB hugepages reported on node 1
00:37:09.126 [2024-06-11 14:05:01.807100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:09.126 [2024-06-11 14:05:01.889444] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:37:09.695 14:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:37:09.695 14:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0
00:37:09.695 14:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:37:09.695 14:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:37:09.695 [common/autotest_common.sh@560/@10/@588: rpc_cmd succeeds]
00:37:09.954 14:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:37:09.954 [common/autotest_common.sh@560/@10/@588: rpc_cmd succeeds]
00:37:09.954 14:05:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
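The three rpc_cmd invocations above are the whole host-side configuration. Written out as direct rpc.py calls they look like this (flags copied verbatim from the trace; rpc_cmd is the harness wrapper around rpc.py):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /tmp/host.sock bdev_nvme_set_options -e 1   # options as traced at @65
    "$rpc" -s /tmp/host.sock framework_start_init         # leave the --wait-for-rpc pause
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The 2 s controller-loss timeout and 1 s reconnect delay matter for what follows: they are what make the attached controller give up quickly once the target interface disappears.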
00:37:10.890 [2024-06-11 14:05:03.649417] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:37:10.890 [2024-06-11 14:05:03.649448] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:37:10.890 [2024-06-11 14:05:03.649468] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:37:10.890 [2024-06-11 14:05:03.738755] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:37:11.148 [2024-06-11 14:05:03.842786-842876] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8, 1 and 64 blocks with offset 0
00:37:11.148 [2024-06-11 14:05:03.842893] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:37:11.148 [2024-06-11 14:05:03.842917] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:37:11.148 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:37:11.149 [2024-06-11 14:05:03.848343] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x94c5c0 was disconnected and freed. delete nvme_qpair.
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:37:11.149 14:05:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:37:11.149 14:05:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:37:11.149 [14:05:04: get_bdev_list expands as above and still returns nvme0n1]
00:37:11.406 14:05:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
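With the target address deleted and cvl_0_0 down, the script now polls until the nvme0n1 bdev disappears. The get_bdev_list/wait_for_bdev pair traced at @29/@33/@34 is, in sketch form (reconstructed from the trace; the in-tree helpers may add a retry limit):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # One sorted, space-joined line of bdev names seen by the host process.
    get_bdev_list() {
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # Poll once a second until the list matches the expectation.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    # wait_for_bdev nvme0n1   -> discovery attached the bdev
    # wait_for_bdev ''        -> controller loss removed it
    # wait_for_bdev nvme1n1   -> rediscovery attached the replacement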
00:37:11.406 14:05:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:37:12.342 [14:05:05: get_bdev_list returns nvme0n1; nvme0n1 != '', so sleep 1]
00:37:13.278 [14:05:06: get_bdev_list returns nvme0n1; nvme0n1 != '', so sleep 1]
00:37:14.474 [14:05:07: get_bdev_list returns nvme0n1; nvme0n1 != '', so sleep 1]
00:37:15.412 14:05:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:37:15.412 [14:05:08: the @29 pipeline runs again and returns nvme0n1; nvme0n1 != '', so sleep 1]
00:37:16.792 [2024-06-11 14:05:09.283641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:37:16.792 [nvme_qpair.c 223/474, 14:05:09.283: the four outstanding ASYNC EVENT REQUEST commands (cid 0-3) and one KEEP ALIVE (cid 4) on the admin queue of nqn.2016-06.io.spdk:cnode0 complete with ABORTED - SQ DELETION (00/08)]
00:37:16.792 [2024-06-11 14:05:09.283834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x912f40 is same with the state(5) to be set
00:37:16.792 [2024-06-11 14:05:09.293662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x912f40 (9): Bad file descriptor
00:37:16.792 [2024-06-11 14:05:09.303708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:37:16.792 14:05:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:37:16.792 [14:05:09: the @29 pipeline returns nvme0n1, the bdev is still present]
00:37:17.729 [2024-06-11 14:05:10.332522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:37:17.729 [2024-06-11 14:05:10.332620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x912f40 with addr=10.0.0.2, port=4420
00:37:17.729 [2024-06-11 14:05:10.332664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x912f40 is same with the state(5) to be set
00:37:17.729 [2024-06-11 14:05:10.332737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x912f40 (9): Bad file descriptor
00:37:17.729 [2024-06-11 14:05:10.333650] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:37:17.729 [2024-06-11 14:05:10.333712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:37:17.729 [2024-06-11 14:05:10.333744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:37:17.729 [2024-06-11 14:05:10.333777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:37:17.729 [2024-06-11 14:05:10.333827] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.729 [2024-06-11 14:05:10.333858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:37:17.729 [14:05:10: get_bdev_list still returns nvme0n1; the loop sleeps 1s]
00:37:18.668 [2024-06-11 14:05:11.336367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.668 [2024-06-11 14:05:11.336412] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:37:18.668 [nvme_qpair.c 223/474, 14:05:11.336: the aborted admin-queue commands (four ASYNC EVENT REQUEST, one KEEP ALIVE) are printed once more with ABORTED - SQ DELETION (00/08) as the discovery controller itself is torn down]
00:37:18.668 [2024-06-11 14:05:11.336596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
00:37:18.668 [2024-06-11 14:05:11.336628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9123d0 (9): Bad file descriptor
00:37:18.668 [2024-06-11 14:05:11.337630] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:37:18.668 [2024-06-11 14:05:11.337649] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register
00:37:18.668 [14:05:11: get_bdev_list now returns an empty list, so wait_for_bdev '' is satisfied]
00:37:18.668 14:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:18.668 14:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:18.668 14:05:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:37:18.927 [14:05:11: get_bdev_list returns an empty list; '' != nvme1n1, so sleep 1]
common/autotest_common.sh@10 -- # set +x 00:37:19.864 14:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:19.864 14:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:19.864 14:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.864 14:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:19.864 14:05:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:20.803 [2024-06-11 14:05:13.354836] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:20.803 [2024-06-11 14:05:13.354858] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:20.803 [2024-06-11 14:05:13.354878] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:20.803 [2024-06-11 14:05:13.481301] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:20.803 14:05:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:20.803 [2024-06-11 14:05:13.698727] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:20.803 [2024-06-11 14:05:13.698771] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:20.803 [2024-06-11 14:05:13.698795] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:20.803 [2024-06-11 14:05:13.698813] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:37:20.803 [2024-06-11 14:05:13.698825] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:20.803 [2024-06-11 14:05:13.704399] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x901ee0 was disconnected and freed. delete nvme_qpair. 
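The alternating get_bdev_list / sleep entries above and below are a one-second polling loop: after the address is re-added and the link brought back up, the host waits for the rediscovered subsystem's bdev (nvme1n1) to reappear in the list served over /tmp/host.sock. A hedged reconstruction of that pattern from the visible xtrace; the real helpers in host/discovery_remove_ifc.sh may differ in detail, and rpc_cmd is assumed to wrap SPDK's scripts/rpc.py as it does elsewhere in this suite ($rootdir is likewise an assumed variable):

  rpc_cmd() {
    # assumed thin wrapper over the SPDK RPC client
    "$rootdir/scripts/rpc.py" "$@"
  }

  get_bdev_list() {
    # bdev names as a single sorted, space-separated line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
    local bdev=$1
    # poll once per second until the expected bdev shows up
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
      sleep 1
    done
  }

With an empty bdev list the comparison renders in the trace as [[ '' != \n\v\m\e\1\n\1 ]]; the backslashes are bash xtrace escaping of the literal pattern, not corruption.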
00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:22.180 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1632842 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1632842 ']' 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1632842 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1632842 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1632842' 00:37:22.181 killing process with pid 1632842 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1632842 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1632842 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:22.181 14:05:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:22.181 rmmod nvme_tcp 00:37:22.181 rmmod nvme_fabrics 00:37:22.181 rmmod nvme_keyring 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
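killprocess, traced above for the host application (pid 1632842) and below for the target (pid 1632679), follows a guard-then-kill pattern: bail on an empty pid, probe liveness with kill -0, refuse to kill a sudo wrapper, then kill and reap. A hedged reconstruction from the xtrace; the real function in common/autotest_common.sh may carry additional platform branches:

  killprocess() {
    local pid=$1
    [[ -z $pid ]] && return 1          # no pid supplied
    kill -0 "$pid" || return 0         # process already gone
    if [[ $(uname) == Linux ]]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name == sudo ]] && return 1  # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                # reap; tolerate non-zero exit
  }

Here both processes report comm= reactor_0 and reactor_1, the SPDK reactor threads, so the kill proceeds and the subsequent wait returns once each app has shut down.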
00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1632679 ']' 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1632679 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1632679 ']' 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1632679 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:22.181 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1632679 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1632679' 00:37:22.440 killing process with pid 1632679 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1632679 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1632679 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:22.440 14:05:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.008 14:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:25.008 00:37:25.008 real 0m23.861s 00:37:25.008 user 0m28.779s 00:37:25.008 sys 0m7.473s 00:37:25.008 14:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:25.008 14:05:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:25.008 ************************************ 00:37:25.008 END TEST nvmf_discovery_remove_ifc 00:37:25.008 ************************************ 00:37:25.008 14:05:17 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:25.008 14:05:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:25.008 14:05:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:25.008 14:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:25.008 ************************************ 00:37:25.008 START TEST nvmf_identify_kernel_target 00:37:25.008 ************************************ 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:25.008 * Looking for test storage... 00:37:25.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
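The extremely long PATH values in the paths/export.sh trace above are not log corruption: the script prepends the same three toolchain directories on every source, so each nested sourcing adds another protoc/go/golangci triplet in front of the previous value. A sketch reconstructed from xtrace lines @2 through @5 (ordering inferred from the prefixes; the actual file may differ):

  # paths/export.sh, as suggested by the trace: every line prepends,
  # so repeated sourcing duplicates the same prefixes in $PATH
  PATH=/opt/golangci/1.54.2/bin:$PATH
  PATH=/opt/go/1.21.1/bin:$PATH
  PATH=/opt/protoc/21.7/bin:$PATH
  export PATH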
00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:37:25.008 14:05:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.612 14:05:23 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:31.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:31.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:31.612 
14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:31.612 Found net devices under 0000:af:00.0: cvl_0_0 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:31.612 Found net devices under 0000:af:00.1: cvl_0_1 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:31.612 14:05:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:31.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:37:31.612 00:37:31.612 --- 10.0.0.2 ping statistics --- 00:37:31.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.612 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:31.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:37:31.612 00:37:31.612 --- 10.0.0.1 ping statistics --- 00:37:31.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.612 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:31.612 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:31.613 
14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:31.613 14:05:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:34.910 Waiting for block devices as requested 00:37:34.910 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:34.910 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:34.910 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:34.910 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:34.910 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:35.169 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:35.169 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:35.169 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:35.430 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:35.430 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:35.430 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:35.690 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:35.690 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:35.690 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:35.949 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:35.949 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:35.949 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:36.209 14:05:28 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:36.209 14:05:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:36.209 No valid GPT data, bailing 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:36.209 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:37:36.470 00:37:36.470 Discovery Log Number of Records 2, Generation counter 2 00:37:36.470 =====Discovery Log Entry 0====== 00:37:36.470 trtype: tcp 00:37:36.470 adrfam: ipv4 00:37:36.470 subtype: current discovery subsystem 00:37:36.470 treq: not specified, sq flow control disable supported 00:37:36.470 portid: 1 00:37:36.470 trsvcid: 4420 00:37:36.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:36.470 traddr: 10.0.0.1 00:37:36.470 eflags: none 00:37:36.470 sectype: none 00:37:36.470 =====Discovery Log Entry 1====== 
00:37:36.470 trtype: tcp 00:37:36.470 adrfam: ipv4 00:37:36.470 subtype: nvme subsystem 00:37:36.470 treq: not specified, sq flow control disable supported 00:37:36.470 portid: 1 00:37:36.470 trsvcid: 4420 00:37:36.470 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:36.470 traddr: 10.0.0.1 00:37:36.470 eflags: none 00:37:36.470 sectype: none 00:37:36.470 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:37:36.470 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:37:36.470 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.470 ===================================================== 00:37:36.470 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:37:36.470 ===================================================== 00:37:36.470 Controller Capabilities/Features 00:37:36.470 ================================ 00:37:36.470 Vendor ID: 0000 00:37:36.470 Subsystem Vendor ID: 0000 00:37:36.470 Serial Number: 50c0fb789db3b1cad211 00:37:36.470 Model Number: Linux 00:37:36.470 Firmware Version: 6.7.0-68 00:37:36.470 Recommended Arb Burst: 0 00:37:36.470 IEEE OUI Identifier: 00 00 00 00:37:36.470 Multi-path I/O 00:37:36.470 May have multiple subsystem ports: No 00:37:36.470 May have multiple controllers: No 00:37:36.470 Associated with SR-IOV VF: No 00:37:36.470 Max Data Transfer Size: Unlimited 00:37:36.470 Max Number of Namespaces: 0 00:37:36.470 Max Number of I/O Queues: 1024 00:37:36.470 NVMe Specification Version (VS): 1.3 00:37:36.470 NVMe Specification Version (Identify): 1.3 00:37:36.470 Maximum Queue Entries: 1024 00:37:36.470 Contiguous Queues Required: No 00:37:36.470 Arbitration Mechanisms Supported 00:37:36.470 Weighted Round Robin: Not Supported 00:37:36.470 Vendor Specific: Not Supported 00:37:36.470 Reset Timeout: 7500 ms 00:37:36.470 Doorbell Stride: 4 bytes 00:37:36.470 NVM Subsystem Reset: Not Supported 00:37:36.470 Command Sets Supported 00:37:36.470 NVM Command Set: Supported 00:37:36.470 Boot Partition: Not Supported 00:37:36.470 Memory Page Size Minimum: 4096 bytes 00:37:36.470 Memory Page Size Maximum: 4096 bytes 00:37:36.470 Persistent Memory Region: Not Supported 00:37:36.470 Optional Asynchronous Events Supported 00:37:36.470 Namespace Attribute Notices: Not Supported 00:37:36.470 Firmware Activation Notices: Not Supported 00:37:36.470 ANA Change Notices: Not Supported 00:37:36.470 PLE Aggregate Log Change Notices: Not Supported 00:37:36.470 LBA Status Info Alert Notices: Not Supported 00:37:36.470 EGE Aggregate Log Change Notices: Not Supported 00:37:36.470 Normal NVM Subsystem Shutdown event: Not Supported 00:37:36.470 Zone Descriptor Change Notices: Not Supported 00:37:36.470 Discovery Log Change Notices: Supported 00:37:36.470 Controller Attributes 00:37:36.470 128-bit Host Identifier: Not Supported 00:37:36.470 Non-Operational Permissive Mode: Not Supported 00:37:36.470 NVM Sets: Not Supported 00:37:36.470 Read Recovery Levels: Not Supported 00:37:36.470 Endurance Groups: Not Supported 00:37:36.470 Predictable Latency Mode: Not Supported 00:37:36.470 Traffic Based Keep ALive: Not Supported 00:37:36.470 Namespace Granularity: Not Supported 00:37:36.470 SQ Associations: Not Supported 00:37:36.470 UUID List: Not Supported 00:37:36.470 Multi-Domain Subsystem: Not Supported 00:37:36.470 Fixed Capacity Management: Not Supported 00:37:36.470 Variable Capacity Management: Not 
Supported 00:37:36.470 Delete Endurance Group: Not Supported 00:37:36.470 Delete NVM Set: Not Supported 00:37:36.470 Extended LBA Formats Supported: Not Supported 00:37:36.470 Flexible Data Placement Supported: Not Supported 00:37:36.470 00:37:36.470 Controller Memory Buffer Support 00:37:36.470 ================================ 00:37:36.470 Supported: No 00:37:36.470 00:37:36.470 Persistent Memory Region Support 00:37:36.470 ================================ 00:37:36.470 Supported: No 00:37:36.470 00:37:36.470 Admin Command Set Attributes 00:37:36.470 ============================ 00:37:36.470 Security Send/Receive: Not Supported 00:37:36.470 Format NVM: Not Supported 00:37:36.470 Firmware Activate/Download: Not Supported 00:37:36.470 Namespace Management: Not Supported 00:37:36.470 Device Self-Test: Not Supported 00:37:36.470 Directives: Not Supported 00:37:36.470 NVMe-MI: Not Supported 00:37:36.470 Virtualization Management: Not Supported 00:37:36.470 Doorbell Buffer Config: Not Supported 00:37:36.470 Get LBA Status Capability: Not Supported 00:37:36.470 Command & Feature Lockdown Capability: Not Supported 00:37:36.470 Abort Command Limit: 1 00:37:36.470 Async Event Request Limit: 1 00:37:36.470 Number of Firmware Slots: N/A 00:37:36.470 Firmware Slot 1 Read-Only: N/A 00:37:36.470 Firmware Activation Without Reset: N/A 00:37:36.470 Multiple Update Detection Support: N/A 00:37:36.470 Firmware Update Granularity: No Information Provided 00:37:36.470 Per-Namespace SMART Log: No 00:37:36.470 Asymmetric Namespace Access Log Page: Not Supported 00:37:36.470 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:37:36.470 Command Effects Log Page: Not Supported 00:37:36.470 Get Log Page Extended Data: Supported 00:37:36.470 Telemetry Log Pages: Not Supported 00:37:36.470 Persistent Event Log Pages: Not Supported 00:37:36.470 Supported Log Pages Log Page: May Support 00:37:36.470 Commands Supported & Effects Log Page: Not Supported 00:37:36.470 Feature Identifiers & Effects Log Page:May Support 00:37:36.470 NVMe-MI Commands & Effects Log Page: May Support 00:37:36.470 Data Area 4 for Telemetry Log: Not Supported 00:37:36.470 Error Log Page Entries Supported: 1 00:37:36.470 Keep Alive: Not Supported 00:37:36.470 00:37:36.470 NVM Command Set Attributes 00:37:36.470 ========================== 00:37:36.470 Submission Queue Entry Size 00:37:36.470 Max: 1 00:37:36.470 Min: 1 00:37:36.470 Completion Queue Entry Size 00:37:36.470 Max: 1 00:37:36.470 Min: 1 00:37:36.470 Number of Namespaces: 0 00:37:36.470 Compare Command: Not Supported 00:37:36.470 Write Uncorrectable Command: Not Supported 00:37:36.470 Dataset Management Command: Not Supported 00:37:36.470 Write Zeroes Command: Not Supported 00:37:36.470 Set Features Save Field: Not Supported 00:37:36.470 Reservations: Not Supported 00:37:36.470 Timestamp: Not Supported 00:37:36.470 Copy: Not Supported 00:37:36.470 Volatile Write Cache: Not Present 00:37:36.470 Atomic Write Unit (Normal): 1 00:37:36.470 Atomic Write Unit (PFail): 1 00:37:36.470 Atomic Compare & Write Unit: 1 00:37:36.470 Fused Compare & Write: Not Supported 00:37:36.470 Scatter-Gather List 00:37:36.470 SGL Command Set: Supported 00:37:36.470 SGL Keyed: Not Supported 00:37:36.470 SGL Bit Bucket Descriptor: Not Supported 00:37:36.470 SGL Metadata Pointer: Not Supported 00:37:36.470 Oversized SGL: Not Supported 00:37:36.470 SGL Metadata Address: Not Supported 00:37:36.470 SGL Offset: Supported 00:37:36.470 Transport SGL Data Block: Not Supported 00:37:36.470 Replay Protected Memory Block: 
Not Supported 00:37:36.470 00:37:36.470 Firmware Slot Information 00:37:36.470 ========================= 00:37:36.470 Active slot: 0 00:37:36.470 00:37:36.470 00:37:36.470 Error Log 00:37:36.470 ========= 00:37:36.470 00:37:36.470 Active Namespaces 00:37:36.470 ================= 00:37:36.470 Discovery Log Page 00:37:36.470 ================== 00:37:36.470 Generation Counter: 2 00:37:36.470 Number of Records: 2 00:37:36.470 Record Format: 0 00:37:36.470 00:37:36.470 Discovery Log Entry 0 00:37:36.470 ---------------------- 00:37:36.470 Transport Type: 3 (TCP) 00:37:36.470 Address Family: 1 (IPv4) 00:37:36.470 Subsystem Type: 3 (Current Discovery Subsystem) 00:37:36.470 Entry Flags: 00:37:36.471 Duplicate Returned Information: 0 00:37:36.471 Explicit Persistent Connection Support for Discovery: 0 00:37:36.471 Transport Requirements: 00:37:36.471 Secure Channel: Not Specified 00:37:36.471 Port ID: 1 (0x0001) 00:37:36.471 Controller ID: 65535 (0xffff) 00:37:36.471 Admin Max SQ Size: 32 00:37:36.471 Transport Service Identifier: 4420 00:37:36.471 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:37:36.471 Transport Address: 10.0.0.1 00:37:36.471 Discovery Log Entry 1 00:37:36.471 ---------------------- 00:37:36.471 Transport Type: 3 (TCP) 00:37:36.471 Address Family: 1 (IPv4) 00:37:36.471 Subsystem Type: 2 (NVM Subsystem) 00:37:36.471 Entry Flags: 00:37:36.471 Duplicate Returned Information: 0 00:37:36.471 Explicit Persistent Connection Support for Discovery: 0 00:37:36.471 Transport Requirements: 00:37:36.471 Secure Channel: Not Specified 00:37:36.471 Port ID: 1 (0x0001) 00:37:36.471 Controller ID: 65535 (0xffff) 00:37:36.471 Admin Max SQ Size: 32 00:37:36.471 Transport Service Identifier: 4420 00:37:36.471 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:37:36.471 Transport Address: 10.0.0.1 00:37:36.471 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.471 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.471 get_feature(0x01) failed 00:37:36.471 get_feature(0x02) failed 00:37:36.471 get_feature(0x04) failed 00:37:36.471 ===================================================== 00:37:36.471 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:36.471 ===================================================== 00:37:36.471 Controller Capabilities/Features 00:37:36.471 ================================ 00:37:36.471 Vendor ID: 0000 00:37:36.471 Subsystem Vendor ID: 0000 00:37:36.471 Serial Number: bee28f1ad7f475018264 00:37:36.471 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:37:36.471 Firmware Version: 6.7.0-68 00:37:36.471 Recommended Arb Burst: 6 00:37:36.471 IEEE OUI Identifier: 00 00 00 00:37:36.471 Multi-path I/O 00:37:36.471 May have multiple subsystem ports: Yes 00:37:36.471 May have multiple controllers: Yes 00:37:36.471 Associated with SR-IOV VF: No 00:37:36.471 Max Data Transfer Size: Unlimited 00:37:36.471 Max Number of Namespaces: 1024 00:37:36.471 Max Number of I/O Queues: 128 00:37:36.471 NVMe Specification Version (VS): 1.3 00:37:36.471 NVMe Specification Version (Identify): 1.3 00:37:36.471 Maximum Queue Entries: 1024 00:37:36.471 Contiguous Queues Required: No 00:37:36.471 Arbitration Mechanisms Supported 00:37:36.471 Weighted Round Robin: Not Supported 00:37:36.471 Vendor Specific: Not Supported 
00:37:36.471 Reset Timeout: 7500 ms 00:37:36.471 Doorbell Stride: 4 bytes 00:37:36.471 NVM Subsystem Reset: Not Supported 00:37:36.471 Command Sets Supported 00:37:36.471 NVM Command Set: Supported 00:37:36.471 Boot Partition: Not Supported 00:37:36.471 Memory Page Size Minimum: 4096 bytes 00:37:36.471 Memory Page Size Maximum: 4096 bytes 00:37:36.471 Persistent Memory Region: Not Supported 00:37:36.471 Optional Asynchronous Events Supported 00:37:36.471 Namespace Attribute Notices: Supported 00:37:36.471 Firmware Activation Notices: Not Supported 00:37:36.471 ANA Change Notices: Supported 00:37:36.471 PLE Aggregate Log Change Notices: Not Supported 00:37:36.471 LBA Status Info Alert Notices: Not Supported 00:37:36.471 EGE Aggregate Log Change Notices: Not Supported 00:37:36.471 Normal NVM Subsystem Shutdown event: Not Supported 00:37:36.471 Zone Descriptor Change Notices: Not Supported 00:37:36.471 Discovery Log Change Notices: Not Supported 00:37:36.471 Controller Attributes 00:37:36.471 128-bit Host Identifier: Supported 00:37:36.471 Non-Operational Permissive Mode: Not Supported 00:37:36.471 NVM Sets: Not Supported 00:37:36.471 Read Recovery Levels: Not Supported 00:37:36.471 Endurance Groups: Not Supported 00:37:36.471 Predictable Latency Mode: Not Supported 00:37:36.471 Traffic Based Keep ALive: Supported 00:37:36.471 Namespace Granularity: Not Supported 00:37:36.471 SQ Associations: Not Supported 00:37:36.471 UUID List: Not Supported 00:37:36.471 Multi-Domain Subsystem: Not Supported 00:37:36.471 Fixed Capacity Management: Not Supported 00:37:36.471 Variable Capacity Management: Not Supported 00:37:36.471 Delete Endurance Group: Not Supported 00:37:36.471 Delete NVM Set: Not Supported 00:37:36.471 Extended LBA Formats Supported: Not Supported 00:37:36.471 Flexible Data Placement Supported: Not Supported 00:37:36.471 00:37:36.471 Controller Memory Buffer Support 00:37:36.471 ================================ 00:37:36.471 Supported: No 00:37:36.471 00:37:36.471 Persistent Memory Region Support 00:37:36.471 ================================ 00:37:36.471 Supported: No 00:37:36.471 00:37:36.471 Admin Command Set Attributes 00:37:36.471 ============================ 00:37:36.471 Security Send/Receive: Not Supported 00:37:36.471 Format NVM: Not Supported 00:37:36.471 Firmware Activate/Download: Not Supported 00:37:36.471 Namespace Management: Not Supported 00:37:36.471 Device Self-Test: Not Supported 00:37:36.471 Directives: Not Supported 00:37:36.471 NVMe-MI: Not Supported 00:37:36.471 Virtualization Management: Not Supported 00:37:36.471 Doorbell Buffer Config: Not Supported 00:37:36.471 Get LBA Status Capability: Not Supported 00:37:36.471 Command & Feature Lockdown Capability: Not Supported 00:37:36.471 Abort Command Limit: 4 00:37:36.471 Async Event Request Limit: 4 00:37:36.471 Number of Firmware Slots: N/A 00:37:36.471 Firmware Slot 1 Read-Only: N/A 00:37:36.471 Firmware Activation Without Reset: N/A 00:37:36.471 Multiple Update Detection Support: N/A 00:37:36.471 Firmware Update Granularity: No Information Provided 00:37:36.471 Per-Namespace SMART Log: Yes 00:37:36.471 Asymmetric Namespace Access Log Page: Supported 00:37:36.471 ANA Transition Time : 10 sec 00:37:36.471 00:37:36.471 Asymmetric Namespace Access Capabilities 00:37:36.471 ANA Optimized State : Supported 00:37:36.471 ANA Non-Optimized State : Supported 00:37:36.471 ANA Inaccessible State : Supported 00:37:36.471 ANA Persistent Loss State : Supported 00:37:36.471 ANA Change State : Supported 00:37:36.471 ANAGRPID is not 
changed : No 00:37:36.471 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:37:36.471 00:37:36.471 ANA Group Identifier Maximum : 128 00:37:36.471 Number of ANA Group Identifiers : 128 00:37:36.471 Max Number of Allowed Namespaces : 1024 00:37:36.471 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:37:36.471 Command Effects Log Page: Supported 00:37:36.471 Get Log Page Extended Data: Supported 00:37:36.471 Telemetry Log Pages: Not Supported 00:37:36.471 Persistent Event Log Pages: Not Supported 00:37:36.471 Supported Log Pages Log Page: May Support 00:37:36.471 Commands Supported & Effects Log Page: Not Supported 00:37:36.471 Feature Identifiers & Effects Log Page:May Support 00:37:36.471 NVMe-MI Commands & Effects Log Page: May Support 00:37:36.471 Data Area 4 for Telemetry Log: Not Supported 00:37:36.471 Error Log Page Entries Supported: 128 00:37:36.471 Keep Alive: Supported 00:37:36.471 Keep Alive Granularity: 1000 ms 00:37:36.471 00:37:36.471 NVM Command Set Attributes 00:37:36.471 ========================== 00:37:36.471 Submission Queue Entry Size 00:37:36.471 Max: 64 00:37:36.471 Min: 64 00:37:36.471 Completion Queue Entry Size 00:37:36.471 Max: 16 00:37:36.471 Min: 16 00:37:36.471 Number of Namespaces: 1024 00:37:36.471 Compare Command: Not Supported 00:37:36.471 Write Uncorrectable Command: Not Supported 00:37:36.471 Dataset Management Command: Supported 00:37:36.471 Write Zeroes Command: Supported 00:37:36.471 Set Features Save Field: Not Supported 00:37:36.471 Reservations: Not Supported 00:37:36.471 Timestamp: Not Supported 00:37:36.471 Copy: Not Supported 00:37:36.471 Volatile Write Cache: Present 00:37:36.471 Atomic Write Unit (Normal): 1 00:37:36.471 Atomic Write Unit (PFail): 1 00:37:36.471 Atomic Compare & Write Unit: 1 00:37:36.471 Fused Compare & Write: Not Supported 00:37:36.471 Scatter-Gather List 00:37:36.471 SGL Command Set: Supported 00:37:36.471 SGL Keyed: Not Supported 00:37:36.471 SGL Bit Bucket Descriptor: Not Supported 00:37:36.471 SGL Metadata Pointer: Not Supported 00:37:36.471 Oversized SGL: Not Supported 00:37:36.471 SGL Metadata Address: Not Supported 00:37:36.471 SGL Offset: Supported 00:37:36.471 Transport SGL Data Block: Not Supported 00:37:36.471 Replay Protected Memory Block: Not Supported 00:37:36.471 00:37:36.471 Firmware Slot Information 00:37:36.471 ========================= 00:37:36.471 Active slot: 0 00:37:36.471 00:37:36.471 Asymmetric Namespace Access 00:37:36.471 =========================== 00:37:36.471 Change Count : 0 00:37:36.472 Number of ANA Group Descriptors : 1 00:37:36.472 ANA Group Descriptor : 0 00:37:36.472 ANA Group ID : 1 00:37:36.472 Number of NSID Values : 1 00:37:36.472 Change Count : 0 00:37:36.472 ANA State : 1 00:37:36.472 Namespace Identifier : 1 00:37:36.472 00:37:36.472 Commands Supported and Effects 00:37:36.472 ============================== 00:37:36.472 Admin Commands 00:37:36.472 -------------- 00:37:36.472 Get Log Page (02h): Supported 00:37:36.472 Identify (06h): Supported 00:37:36.472 Abort (08h): Supported 00:37:36.472 Set Features (09h): Supported 00:37:36.472 Get Features (0Ah): Supported 00:37:36.472 Asynchronous Event Request (0Ch): Supported 00:37:36.472 Keep Alive (18h): Supported 00:37:36.472 I/O Commands 00:37:36.472 ------------ 00:37:36.472 Flush (00h): Supported 00:37:36.472 Write (01h): Supported LBA-Change 00:37:36.472 Read (02h): Supported 00:37:36.472 Write Zeroes (08h): Supported LBA-Change 00:37:36.472 Dataset Management (09h): Supported 00:37:36.472 00:37:36.472 Error Log 00:37:36.472 ========= 
00:37:36.472 Entry: 0 00:37:36.472 Error Count: 0x3 00:37:36.472 Submission Queue Id: 0x0 00:37:36.472 Command Id: 0x5 00:37:36.472 Phase Bit: 0 00:37:36.472 Status Code: 0x2 00:37:36.472 Status Code Type: 0x0 00:37:36.472 Do Not Retry: 1 00:37:36.472 Error Location: 0x28 00:37:36.472 LBA: 0x0 00:37:36.472 Namespace: 0x0 00:37:36.472 Vendor Log Page: 0x0 00:37:36.472 ----------- 00:37:36.472 Entry: 1 00:37:36.472 Error Count: 0x2 00:37:36.472 Submission Queue Id: 0x0 00:37:36.472 Command Id: 0x5 00:37:36.472 Phase Bit: 0 00:37:36.472 Status Code: 0x2 00:37:36.472 Status Code Type: 0x0 00:37:36.472 Do Not Retry: 1 00:37:36.472 Error Location: 0x28 00:37:36.472 LBA: 0x0 00:37:36.472 Namespace: 0x0 00:37:36.472 Vendor Log Page: 0x0 00:37:36.472 ----------- 00:37:36.472 Entry: 2 00:37:36.472 Error Count: 0x1 00:37:36.472 Submission Queue Id: 0x0 00:37:36.472 Command Id: 0x4 00:37:36.472 Phase Bit: 0 00:37:36.472 Status Code: 0x2 00:37:36.472 Status Code Type: 0x0 00:37:36.472 Do Not Retry: 1 00:37:36.472 Error Location: 0x28 00:37:36.472 LBA: 0x0 00:37:36.472 Namespace: 0x0 00:37:36.472 Vendor Log Page: 0x0 00:37:36.472 00:37:36.472 Number of Queues 00:37:36.472 ================ 00:37:36.472 Number of I/O Submission Queues: 128 00:37:36.472 Number of I/O Completion Queues: 128 00:37:36.472 00:37:36.472 ZNS Specific Controller Data 00:37:36.472 ============================ 00:37:36.472 Zone Append Size Limit: 0 00:37:36.472 00:37:36.472 00:37:36.472 Active Namespaces 00:37:36.472 ================= 00:37:36.472 get_feature(0x05) failed 00:37:36.472 Namespace ID:1 00:37:36.472 Command Set Identifier: NVM (00h) 00:37:36.472 Deallocate: Supported 00:37:36.472 Deallocated/Unwritten Error: Not Supported 00:37:36.472 Deallocated Read Value: Unknown 00:37:36.472 Deallocate in Write Zeroes: Not Supported 00:37:36.472 Deallocated Guard Field: 0xFFFF 00:37:36.472 Flush: Supported 00:37:36.472 Reservation: Not Supported 00:37:36.472 Namespace Sharing Capabilities: Multiple Controllers 00:37:36.472 Size (in LBAs): 3125627568 (1490GiB) 00:37:36.472 Capacity (in LBAs): 3125627568 (1490GiB) 00:37:36.472 Utilization (in LBAs): 3125627568 (1490GiB) 00:37:36.472 UUID: a079805e-6201-428d-a0f4-0f2794798134 00:37:36.472 Thin Provisioning: Not Supported 00:37:36.472 Per-NS Atomic Units: Yes 00:37:36.472 Atomic Boundary Size (Normal): 0 00:37:36.472 Atomic Boundary Size (PFail): 0 00:37:36.472 Atomic Boundary Offset: 0 00:37:36.472 NGUID/EUI64 Never Reused: No 00:37:36.472 ANA group ID: 1 00:37:36.472 Namespace Write Protected: No 00:37:36.472 Number of LBA Formats: 1 00:37:36.472 Current LBA Format: LBA Format #00 00:37:36.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:37:36.472 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:36.472 rmmod nvme_tcp 00:37:36.472 rmmod nvme_fabrics 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.472 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:36.732 14:05:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.638 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:38.638 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:37:38.638 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:38.638 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:38.639 14:05:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:41.930 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:41.930 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:42.189 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:42.189 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:42.189 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:42.189 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:42.189 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:37:42.189 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:42.189 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:44.098 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:37:44.098 00:37:44.098 real 0m19.145s 00:37:44.098 user 0m4.457s 00:37:44.098 sys 0m10.196s 00:37:44.098 14:05:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:44.098 14:05:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:44.098 ************************************ 00:37:44.098 END TEST nvmf_identify_kernel_target 00:37:44.098 ************************************ 00:37:44.098 14:05:36 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:44.098 14:05:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:44.098 14:05:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:44.098 14:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:44.098 ************************************ 00:37:44.098 START TEST nvmf_auth_host 00:37:44.098 ************************************ 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:37:44.098 * Looking for test storage... 00:37:44.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
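The auth run above has just sourced test/nvmf/common.sh, where the host identity for the rest of the test comes from nvme-cli. A minimal sketch of that derivation, assuming nvme-cli's gen-hostnqn; the UUID-slicing and variable handling here are illustrative, not the exact common.sh internals:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # the trailing UUID doubles as the host ID (assumed slice)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later reused verbatim on initiator-side commands, e.g.:
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420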
00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:37:44.098 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:37:44.099 14:05:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:50.662 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:50.662 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:50.662 Found net devices under 0000:af:00.0: cvl_0_0 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:50.662 Found net devices under 0000:af:00.1: cvl_0_1 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:50.662 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:50.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:37:50.921 00:37:50.921 --- 10.0.0.2 ping statistics --- 00:37:50.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.921 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:50.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:37:50.921 00:37:50.921 --- 10.0.0.1 ping statistics --- 00:37:50.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.921 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:50.921 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1645491 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1645491 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1645491 ']' 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:37:50.922 14:05:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bab00b54dfedbe6d78f64d1255f05f74 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BC2 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bab00b54dfedbe6d78f64d1255f05f74 0 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bab00b54dfedbe6d78f64d1255f05f74 0 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bab00b54dfedbe6d78f64d1255f05f74 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:51.860 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BC2 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BC2 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # 
keys[0]=/tmp/spdk.key-null.BC2 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc38205a038c79278915a6aaf336962945cadf3e6feffd15af04ad71a755c71a 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zuo 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc38205a038c79278915a6aaf336962945cadf3e6feffd15af04ad71a755c71a 3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc38205a038c79278915a6aaf336962945cadf3e6feffd15af04ad71a755c71a 3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc38205a038c79278915a6aaf336962945cadf3e6feffd15af04ad71a755c71a 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zuo 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zuo 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.zuo 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d1b157d97917779afcc6d65e4a309aa065c4f38f7b2e947 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dz3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d1b157d97917779afcc6d65e4a309aa065c4f38f7b2e947 0 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d1b157d97917779afcc6d65e4a309aa065c4f38f7b2e947 0 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@702 -- # local prefix key digest 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d1b157d97917779afcc6d65e4a309aa065c4f38f7b2e947 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dz3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dz3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Dz3 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ec7e0ab7f34c548e15790d0c8777c080edcb9f1ea4e069e4 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hsm 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ec7e0ab7f34c548e15790d0c8777c080edcb9f1ea4e069e4 2 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ec7e0ab7f34c548e15790d0c8777c080edcb9f1ea4e069e4 2 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ec7e0ab7f34c548e15790d0c8777c080edcb9f1ea4e069e4 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:37:52.119 14:05:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hsm 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hsm 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Hsm 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:52.119 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:52.379 14:05:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a4306aafb6331b4ce09630a5efd5fc3 00:37:52.379 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:37:52.379 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oxO 00:37:52.379 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a4306aafb6331b4ce09630a5efd5fc3 1 00:37:52.379 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a4306aafb6331b4ce09630a5efd5fc3 1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a4306aafb6331b4ce09630a5efd5fc3 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oxO 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oxO 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.oxO 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9c01f23256e1ebad0b554e7e3f7d92d 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9GP 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9c01f23256e1ebad0b554e7e3f7d92d 1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9c01f23256e1ebad0b554e7e3f7d92d 1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9c01f23256e1ebad0b554e7e3f7d92d 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9GP 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9GP 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9GP 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 
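Every gen_dhchap_key / format_dhchap_key pair traced here (the remaining keys below are produced the same way) funnels into the same "python -" step that wraps a random hex secret in the NVMe DH-HMAC-CHAP secret representation. A standalone sketch of that wrapping, an illustrative reimplementation rather than the common.sh code itself: the ASCII hex string is the secret, a little-endian CRC-32 of it is appended, and the result is base64-wrapped as DHHC-1:<digest>:<...>: with 00 meaning a non-hashed key.

key=$(xxd -p -c0 -l 16 /dev/urandom)  # 32 hex chars, as in "gen_dhchap_key null 32" above
python3 -c '
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                  # the hex string itself is the secret
crc = struct.pack("<I", zlib.crc32(secret))    # CRC-32 of the secret, little-endian
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
' "$key"
# the helper then writes this to a mktemp file and chmod 0600s it, as traced above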
00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f49ca2b59e697b201de10fb2153c2a3d6e2ee29c9dff19a5 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.70e 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f49ca2b59e697b201de10fb2153c2a3d6e2ee29c9dff19a5 2 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f49ca2b59e697b201de10fb2153c2a3d6e2ee29c9dff19a5 2 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f49ca2b59e697b201de10fb2153c2a3d6e2ee29c9dff19a5 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.70e 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.70e 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.70e 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=15d5e9f5caa6793eba8979968fed14de 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GhB 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 15d5e9f5caa6793eba8979968fed14de 0 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 15d5e9f5caa6793eba8979968fed14de 0 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=15d5e9f5caa6793eba8979968fed14de 00:37:52.380 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:37:52.380 
14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GhB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GhB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GhB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=329b8445534db94d7b860763e8692a428638ad0f0e4a07533e6edba637bd1bf3 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4UB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 329b8445534db94d7b860763e8692a428638ad0f0e4a07533e6edba637bd1bf3 3 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 329b8445534db94d7b860763e8692a428638ad0f0e4a07533e6edba637bd1bf3 3 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=329b8445534db94d7b860763e8692a428638ad0f0e4a07533e6edba637bd1bf3 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4UB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4UB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.4UB 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1645491 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1645491 ']' 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
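With all five key/ckey pairs generated, the target app is started inside the namespace and each key file is registered over the RPC socket. The rpc_cmd calls that follow the waitforlisten below correspond roughly to plain rpc.py invocations like these (paths and flags copied from the trace):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.BC2    # host key for keyid 0
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zuo  # matching controller key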
00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:52.658 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BC2 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.zuo ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zuo 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Dz3 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Hsm ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hsm 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oxO 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9GP ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9GP 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.70e 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GhB ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GhB 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.4UB 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:52.928 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
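nvmet_auth_init then points configure_kernel_target at the configfs tree whose paths were just computed. The mkdir/echo/ln sequence traced next (nvmf/common.sh@658 onward) amounts to roughly the following; the attribute names are standard kernel nvmet ones, but the mapping of each bare echo in the trace to its target file is inferred from context:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1
# (@665's echo of SPDK-nqn.2024-02.io.spdk:cnode0 likely sets a serial/model attribute; omitted)
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/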
00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:52.929 14:05:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:56.218 Waiting for block devices as requested 00:37:56.218 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:56.218 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:56.477 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:56.477 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:56.477 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:56.736 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:56.736 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:56.736 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:56.995 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:56.995 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:56.995 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:57.932 14:05:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:57.933 No valid GPT data, bailing 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:37:57.933 00:37:57.933 Discovery Log Number of Records 2, Generation counter 2 00:37:57.933 =====Discovery Log Entry 0====== 00:37:57.933 trtype: tcp 00:37:57.933 adrfam: ipv4 00:37:57.933 subtype: current discovery subsystem 00:37:57.933 treq: not specified, sq flow control disable supported 00:37:57.933 portid: 1 00:37:57.933 trsvcid: 4420 00:37:57.933 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:57.933 traddr: 10.0.0.1 00:37:57.933 eflags: none 00:37:57.933 sectype: none 00:37:57.933 =====Discovery Log Entry 1====== 00:37:57.933 trtype: tcp 00:37:57.933 adrfam: ipv4 00:37:57.933 subtype: nvme subsystem 00:37:57.933 treq: not specified, sq flow control disable supported 00:37:57.933 portid: 1 00:37:57.933 trsvcid: 4420 00:37:57.933 subnqn: nqn.2024-02.io.spdk:cnode0 00:37:57.933 traddr: 10.0.0.1 00:37:57.933 eflags: none 00:37:57.933 sectype: none 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 
]] 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:57.933 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.193 nvme0n1 00:37:58.193 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.193 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.193 14:05:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.193 14:05:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.193 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.193 14:05:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.193 
14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.193 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.453 nvme0n1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.453 14:05:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.453 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.712 nvme0n1 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
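
The nvmet_auth_set_key calls above (host/auth.sh@42-51) program the kernel target's per-host DH-HMAC-CHAP parameters before each connect attempt: a digest (@48), a DH group (@49), the host key (@50) and, when a controller key is present, the key for bidirectional authentication (@51). A minimal standalone sketch of that configfs sequence, assuming the stock Linux nvmet attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a host entry already created and linked under the subsystem's allowed_hosts, as done earlier in this log:

#!/usr/bin/env bash
# Sketch only: set DH-HMAC-CHAP material on a kernel nvmet target for one host.
# Attribute names assume the standard Linux nvmet configfs layout.
hostnqn=nqn.2024-02.io.spdk:host0
hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn

key='DHHC-1:00:...'    # host key; value elided here
ckey='DHHC-1:02:...'   # controller key; leave empty to skip bidirectional auth

echo 'hmac(sha256)' > "$hostdir/dhchap_hash"     # digest, as in auth.sh@48
echo ffdhe2048      > "$hostdir/dhchap_dhgroup"  # DH group, as in auth.sh@49
echo "$key"         > "$hostdir/dhchap_key"      # host key, as in auth.sh@50
[[ -n $ckey ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"  # auth.sh@51
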
00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:58.712 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.713 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.972 nvme0n1 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:37:58.972 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:37:58.973 14:05:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.973 nvme0n1 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:58.973 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.233 14:05:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.233 nvme0n1 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:59.233 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.234 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 nvme0n1 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.492 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.493 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.752 nvme0n1 00:37:59.752 
14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.752 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.012 nvme0n1 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
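
On the initiator side, each pass of connect_authenticate pins bdev_nvme to a single digest/DH-group pair (@60), resolves the target address via get_main_ns_ip — which selects NVMF_INITIATOR_IP (10.0.0.1) because the transport is tcp rather than rdma — re-attaches the controller with the matching key pair (@61), checks that nvme0 came up (@64), and detaches it (@65) before the next combination. The same RPCs issued directly through scripts/rpc.py look roughly as below; rpc_cmd in the log is a thin wrapper around rpc.py, and key1/ckey1 name keyring entries loaded earlier in the test:

# Restrict negotiation to one digest and one DH group (auth.sh@60).
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with DH-HMAC-CHAP, authenticating both directions (auth.sh@61).
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller exists, then tear it down for the next pass
# (auth.sh@64/@65).
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
./scripts/rpc.py bdev_nvme_detach_controller nvme0
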
00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.012 14:05:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.272 nvme0n1 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.272 
14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.272 14:05:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.272 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.532 nvme0n1 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:38:00.532 14:05:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.532 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.792 nvme0n1 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.792 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.051 14:05:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.052 14:05:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:01.052 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.052 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.311 nvme0n1 00:38:01.311 14:05:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.311 14:05:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.311 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.571 nvme0n1 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
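[editor's note: the records above are one pass of the digest/dhgroup/keyid sweep that host/auth.sh runs. Below is a minimal sketch of the driving loop, reconstructed from the host/auth.sh@101-@104 xtrace lines in this log; it is a paraphrase, not the verbatim SPDK source. The digest is sha256 throughout this stretch, and only ffdhe4096/ffdhe6144/ffdhe8192 appear here, so the real dhgroups array may well be longer. Both helpers it calls are sketched further down, after the log stretches that trace them.]

    # Assumed driver loop; keys[]/ckeys[] hold the DHHC-1 secrets echoed above.
    digest=sha256
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # groups visible in this part of the log
    for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
        for keyid in "${!keys[@]}"; do         # host/auth.sh@102 (keyids 0-4 here)
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103: program the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: attach, verify, detach
        done
    done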
00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.571 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.572 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.831 nvme0n1 00:38:01.831 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:01.831 14:05:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:01.831 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:01.831 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:01.831 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:01.831 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.090 14:05:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.350 nvme0n1 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:02.350 14:05:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.350 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.919 nvme0n1 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:02.919 
14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:02.919 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:02.920 14:05:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.920 14:05:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.179 nvme0n1 00:38:03.179 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.179 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:03.179 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:03.179 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.180 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.180 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.440 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.700 nvme0n1 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.700 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:03.959 
14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:03.959 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.960 14:05:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.220 nvme0n1 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.220 14:05:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.479 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.480 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.739 nvme0n1 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:04.739 14:05:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.677 nvme0n1 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:05.677 14:05:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:05.677 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:05.678 14:05:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.246 nvme0n1 00:38:06.246 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.246 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:06.246 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.246 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:06.246 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.246 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.506 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.075 nvme0n1 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:07.075 
14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
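
The get_main_ns_ip xtrace that keeps repeating around these attach calls is the harness resolving which address the host should dial: it maps the transport to the *name* of an environment variable, then expands that name indirectly. A minimal sketch of that helper, reconstructed only from the trace (the variable names NVMF_FIRST_TARGET_IP/NVMF_INITIATOR_IP come straight from the log; the transport variable is assumed to be TEST_TRANSPORT, and the real body in nvmf/common.sh may differ):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP  # RDMA runs dial the first target IP
            ["tcp"]=NVMF_INITIATOR_IP      # TCP runs (this job) dial the initiator IP
        )
        # Both the transport and the variable name it maps to must be non-empty
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion: ${!ip} turns the name into its value, 10.0.0.1 here
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
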
00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:07.075 14:05:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.013 nvme0n1 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:08.013 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:08.014 
14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.014 14:06:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.582 nvme0n1 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.582 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.842 nvme0n1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
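
nvmet_auth_set_key is the target-side half of each iteration: the echo lines in its trace ('hmac(sha384)', the DH group, the key, and, when ckey is non-empty, the controller key) are writes into the kernel nvmet configfs entry for the host NQN. A plausible shape for the helper, assuming the upstream configfs layout (the /sys/kernel/config/nvmet/hosts path and the dhchap_* attribute names are standard kernel nvmet names, not taken from this log; keys/ckeys are the arrays the surrounding "for keyid" loop iterates over):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe2048
        echo "${key}"          > "${host}/dhchap_key"      # host's DHHC-1 secret
        # A controller key is only programmed for bidirectional-auth cases
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }
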
00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.842 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.102 nvme0n1 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.102 14:06:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.363 nvme0n1 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.363 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.624 nvme0n1 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:09.624 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.625 nvme0n1 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.625 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
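
Every secret in this log is an NVMe DH-HMAC-CHAP key string of the form DHHC-1:<tt>:<base64>: . The two-digit <tt> field says how the base64 payload was derived from the configured secret, and thereby fixes the secret's length; this run deliberately sweeps all four values across keyids 0-4 (the per-value meanings below follow the NVMe DH-CHAP secret-representation rules rather than anything stated in the log, and the payload is believed to carry a CRC-32 tail over the secret):

    tt   transformation   secret length
    00   none (raw)       32, 48, or 64 bytes, as generated
    01   SHA-256          32 bytes
    02   SHA-384          48 bytes
    03   SHA-512          64 bytes

Such strings are normally minted with nvme-cli's gen-dhchap-key subcommand rather than written by hand.
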
00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.884 nvme0n1 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:09.884 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
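
connect_authenticate is the host-side half: it narrows SPDK's bdev/nvme layer to exactly the digest and DH group under test, dials the kernel target with the matching key pair, and then treats the appearance of a controller named nvme0 as proof the handshake succeeded. Replayed outside the harness, the same sequence is four RPCs (rpc_cmd in the trace wraps scripts/rpc.py; every flag, NQN, and key name below is copied from the log):

    # Permit exactly one digest and one DH group for the next handshake
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Attach: --dhchap-key authenticates the host, --dhchap-ctrlr-key the controller
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify and tear down, mirroring the [[ nvme0 == nvme0 ]] check in the trace
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
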
00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:10.143 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.144 nvme0n1 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.144 14:06:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.144 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.144 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.144 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.144 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.144 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:10.406 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.407 nvme0n1 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.407 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.718 nvme0n1 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.718 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.990 nvme0n1 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.990 14:06:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:10.990 14:06:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.249 nvme0n1 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.249 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.508 nvme0n1 00:38:11.508 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.767 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:11.767 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:11.767 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.767 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.767 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.767 14:06:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:11.767 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:11.768 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.027 nvme0n1 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:38:12.027 14:06:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:12.027 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.028 14:06:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.287 nvme0n1 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:12.287 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:38:12.288 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.548 nvme0n1 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:12.548 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:12.808 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.067 nvme0n1 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:13.067 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.068 14:06:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.637 nvme0n1 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.637 14:06:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.637 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.206 nvme0n1 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.206 14:06:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.776 nvme0n1 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:14.776 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:14.777 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
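The ip=NVMF_INITIATOR_IP step just above is the tail of get_main_ns_ip resolving which address the host should dial: the helper keeps a map from transport to the *name* of the environment variable holding the right IP, then dereferences that name. A minimal sketch of the selection logic visible in the trace, assuming the NVMF_INITIATOR_IP/NVMF_FIRST_TARGET_IP variables exported by the nvmf test environment and a TEST_TRANSPORT variable standing in for the literal tcp seen above (both values below are placeholders, not taken from the log):

NVMF_INITIATOR_IP=10.0.0.1      # placeholder; the real value comes from
NVMF_FIRST_TARGET_IP=10.0.0.2   # the nvmf test environment setup
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # RDMA runs dial the target-side IP; TCP runs use the initiator-side IP.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # The map stores a variable *name*; ${!ip} dereferences it (10.0.0.1 here),
    # matching the "ip=NVMF_INITIATOR_IP ... echo 10.0.0.1" lines in the trace.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}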
00:38:14.777 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:14.777 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:14.777 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:14.777 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.777 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.036 nvme0n1 00:38:15.036 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.036 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:15.036 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.036 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:15.036 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.036 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
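The connect_authenticate rounds traced here all have the same host-side shape: restrict the initiator to a single digest/dhgroup pair, attach with the pre-loaded keyring entries, confirm the controller actually surfaced (the attach only completes if DH-HMAC-CHAP succeeded), then detach before the next combination. Condensed from the trace, with sha384/ffdhe8192/keyid 0 as the example values; rpc_cmd is the suite's wrapper around scripts/rpc.py:

digest=sha384 dhgroup=ffdhe8192 keyid=0

rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller exists, then tear it down for the next round.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

For keyid 4 the controller key is empty, so the --dhchap-ctrlr-key argument is dropped entirely; that is what the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 in the trace accomplishes.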
00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.295 14:06:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.863 nvme0n1 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.863 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:15.864 14:06:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.802 nvme0n1 00:38:16.802 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.802 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:16.802 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.803 14:06:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.374 nvme0n1 00:38:17.374 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:17.374 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:17.374 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:17.374 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:17.374 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.374 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:17.633 14:06:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.202 nvme0n1 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:18.202 14:06:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:18.202 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.139 nvme0n1 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.139 14:06:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.139 nvme0n1 00:38:19.139 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.139 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.139 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.139 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.139 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.139 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.399 14:06:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.399 nvme0n1 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:19.399 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.400 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:19.400 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.400 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.659 nvme0n1 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.659 14:06:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:19.659 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:19.660 14:06:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.660 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.919 nvme0n1 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.919 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.179 nvme0n1 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.179 14:06:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.439 nvme0n1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.439 
14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.439 14:06:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.439 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.699 nvme0n1 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
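At this point in the trace the keyid loop has moved on to key 2 and nvmet_auth_set_key is re-provisioning the target side: the 'hmac(sha512)' and ffdhe3072 echoes just above, and the DHHC-1 secret echoes that follow, are its doing. The xtrace records only the echoed values, not where they are written; the sketch below is a hedged reconstruction that assumes they land in the Linux nvmet configfs host attributes, with $hostnqn standing in for the host entry set up earlier in the run.

# Hedged reconstruction of nvmet_auth_set_key; the configfs destination is
# an assumption (the log shows only the echo arguments). keys[] and ckeys[]
# are the per-keyid DHHC-1 secret arrays driven by the surrounding
# 'for keyid in "${!keys[@]}"' loop.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/$hostnqn # assumed path

	echo "hmac($digest)" > "$host/dhchap_hash"  # e.g. hmac(sha512)
	echo "$dhgroup" > "$host/dhchap_dhgroup"    # e.g. ffdhe3072
	echo "$key" > "$host/dhchap_key"            # host secret, DHHC-1:xx:...:
	# keyid 4 has an empty ckey in this run, so bidirectional
	# authentication is skipped for it
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}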
00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:38:20.699 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.700 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.959 nvme0n1 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.959 14:06:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:20.959 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
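The five entries just above are get_main_ns_ip building its transport-to-variable map; the entries that follow dereference it, yielding 10.0.0.1 for this tcp run. A compact reconstruction, assuming the transport is carried in a variable named TEST_TRANSPORT (the trace only ever shows its expanded value, tcp):

# Sketch of get_main_ns_ip as read off the xtrace; TEST_TRANSPORT is an
# assumed variable name, and the final ${!ip} indirection is inferred from
# the '[[ -z 10.0.0.1 ]]' / 'echo 10.0.0.1' entries that follow.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP # rdma jobs use the first target IP
		[tcp]=NVMF_INITIATOR_IP     # tcp jobs (this one) use the initiator IP
	)

	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1 # indirect: NVMF_INITIATOR_IP -> 10.0.0.1
	echo "${!ip}"
}

The address it prints is what feeds the -a argument of the bdev_nvme_attach_controller RPC a few entries later.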
00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:20.960 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.220 nvme0n1 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:21.220 
14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.220 14:06:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.481 nvme0n1 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.481 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.741 nvme0n1 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:21.741 14:06:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:21.741 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:21.742 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.001 nvme0n1 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
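The host/auth.sh@42-51 records above trace nvmet_auth_set_key, which plants the key under test on the kernel nvmet target before each connect attempt. A plausible sketch of those steps follows, assuming the standard Linux nvmet configfs attributes for DH-HMAC-CHAP; xtrace does not show redirection targets, so the host path below is illustrative.

  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}  # arrays iterated by the @102 loop
      # Assumed configfs entry for the allowed host; not visible in the trace.
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)" > "$host/dhchap_hash"    # @48
      echo "$dhgroup" > "$host/dhchap_dhgroup"      # @49
      echo "$key" > "$host/dhchap_key"              # @50
      # @51: the controller (bidirectional) key is optional; keyid 4 has none.
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }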
00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.001 14:06:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.261 nvme0n1 00:38:22.261 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.261 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:38:22.261 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:22.261 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.261 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.261 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.521 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.780 nvme0n1 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.780 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 nvme0n1 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
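Each connect_authenticate pass (host/auth.sh@55-61) first narrows the initiator to the single digest/dhgroup combination under test, then attaches with the matching key material; authentication happens during the attach itself. The two RPCs below are copied from the surrounding ffdhe6144/keyid 0 records; rpc_cmd is the harness's RPC wrapper.

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # The attach only completes if the DH-HMAC-CHAP handshake with the target succeeds.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0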
00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.040 14:06:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.607 nvme0n1 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
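After every successful attach, the host/auth.sh@64-65 records check that the authenticated controller actually materialized, then tear it down so the next digest/dhgroup/key combination starts from a clean state. Equivalently:

  # host/auth.sh@64: the controller created by the attach must report the expected name.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  # host/auth.sh@65: detach before the next combination is exercised.
  rpc_cmd bdev_nvme_detach_controller nvme0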
00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.607 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.175 nvme0n1 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:24.175 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.176 14:06:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.435 nvme0n1 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.435 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.694 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.953 nvme0n1 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.953 14:06:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.519 nvme0n1 00:38:25.519 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.519 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.519 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.519 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.520 14:06:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFiMDBiNTRkZmVkYmU2ZDc4ZjY0ZDEyNTVmMDVmNzTXowSc: 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2MzODIwNWEwMzhjNzkyNzg5MTVhNmFhZjMzNjk2Mjk0NWNhZGYzZTZmZWZmZDE1YWYwNGFkNzFhNzU1YzcxYaq/LyI=: 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.520 14:06:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.455 nvme0n1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.455 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.022 nvme0n1 00:38:27.022 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.023 14:06:19 
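
[Editor's note] The detach/set-key/connect cycle repeating above is one iteration of the test's sweep over DH groups and key IDs. A sketch of the loop shape, reconstructed from the host/auth.sh line numbers visible in the trace (@101-@104); names are the script's own:

    for dhgroup in "${dhgroups[@]}"; do      # here: ffdhe8192
      for keyid in "${!keys[@]}"; do         # here: 0..4
        # program the kernel target with key/ckey for this keyid (@103)
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # attach with the matching --dhchap-key/--dhchap-ctrlr-key (@104)
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
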
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGE0MzA2YWFmYjYzMzFiNGNlMDk2MzBhNWVmZDVmYzNKFLap: 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTljMDFmMjMyNTZlMWViYWQwYjU1NGU3ZTNmN2Q5MmTD0qDT: 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.023 14:06:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.016 nvme0n1 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.016 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjQ5Y2EyYjU5ZTY5N2IyMDFkZTEwZmIyMTUzYzJhM2Q2ZTJlZTI5YzlkZmYxOWE1po9F6A==: 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: ]] 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTVkNWU5ZjVjYWE2NzkzZWJhODk3OTk2OGZlZDE0ZGVRaZNY: 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:38:28.017 14:06:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.017 14:06:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.583 nvme0n1 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzI5Yjg0NDU1MzRkYjk0ZDdiODYwNzYzZTg2OTJhNDI4NjM4YWQwZjBlNGEwNzUzM2U2ZWRiYTYzN2JkMWJmM8GBYW0=: 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:38:28.583 14:06:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.520 nvme0n1 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWQxYjE1N2Q5NzkxNzc3OWFmY2M2ZDY1ZTRhMzA5YWEwNjVjNGYzOGY3YjJlOTQ3qb0vfw==: 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3ZTBhYjdmMzRjNTQ4ZTE1NzkwZDBjODc3N2MwODBlZGNiOWYxZWE0ZTA2OWU05VAYVQ==: 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.520 
14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.520 request: 00:38:29.520 { 00:38:29.520 "name": "nvme0", 00:38:29.520 "trtype": "tcp", 00:38:29.520 "traddr": "10.0.0.1", 00:38:29.520 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:29.520 "adrfam": "ipv4", 00:38:29.520 "trsvcid": "4420", 00:38:29.520 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:29.520 "method": "bdev_nvme_attach_controller", 00:38:29.520 "req_id": 1 00:38:29.520 } 00:38:29.520 Got JSON-RPC error response 00:38:29.520 response: 00:38:29.520 { 00:38:29.520 "code": -5, 00:38:29.520 "message": "Input/output error" 00:38:29.520 } 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:38:29.520 
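
[Editor's note] The request/response pair above is the expected-failure path: with no --dhchap-key supplied against a target that now requires DH-HMAC-CHAP, bdev_nvme_attach_controller is wrapped in NOT and must come back with the JSON-RPC -5 Input/output error. Outside the harness the same check could be scripted as below (a sketch; the rpc.py path and a running target configured as above are assumptions):

    # Must fail: the subsystem requires DH-HMAC-CHAP, but no key is offered.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi
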
14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.520 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.779 request: 00:38:29.779 { 00:38:29.779 "name": "nvme0", 00:38:29.779 "trtype": "tcp", 00:38:29.779 "traddr": "10.0.0.1", 00:38:29.779 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:29.779 "adrfam": "ipv4", 00:38:29.779 "trsvcid": "4420", 00:38:29.779 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:29.779 "dhchap_key": "key2", 00:38:29.779 "method": "bdev_nvme_attach_controller", 00:38:29.779 "req_id": 1 00:38:29.779 } 00:38:29.779 Got JSON-RPC error response 00:38:29.779 response: 00:38:29.779 { 00:38:29.779 "code": -5, 00:38:29.779 "message": "Input/output error" 00:38:29.779 } 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:29.779 
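
[Editor's note] Same negative pattern, second variant: offering a key the target was not programmed with (--dhchap-key key2) is rejected with the identical -5 error. For contrast, the positive path exercised earlier in this log pairs the attach with a matching key and controller key and then verifies the controller appeared (sketch, same assumptions as above):

    # Succeeds: key1/ckey1 match what nvmet_auth_set_key programmed.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
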
14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.779 request: 00:38:29.779 { 00:38:29.779 "name": "nvme0", 00:38:29.779 "trtype": "tcp", 00:38:29.779 "traddr": "10.0.0.1", 00:38:29.779 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:29.779 "adrfam": "ipv4", 00:38:29.779 "trsvcid": "4420", 00:38:29.779 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:29.779 "dhchap_key": "key1", 00:38:29.779 "dhchap_ctrlr_key": "ckey2", 00:38:29.779 "method": "bdev_nvme_attach_controller", 00:38:29.779 "req_id": 1 
00:38:29.779 } 00:38:29.779 Got JSON-RPC error response 00:38:29.779 response: 00:38:29.779 { 00:38:29.779 "code": -5, 00:38:29.779 "message": "Input/output error" 00:38:29.779 } 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:29.779 rmmod nvme_tcp 00:38:29.779 rmmod nvme_fabrics 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1645491 ']' 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1645491 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1645491 ']' 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1645491 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1645491 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1645491' 00:38:29.779 killing process with pid 1645491 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1645491 00:38:29.779 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1645491 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:30.038 14:06:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:30.038 14:06:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:32.575 14:06:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:35.112 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:35.371 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:37.277 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:38:37.277 14:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BC2 /tmp/spdk.key-null.Dz3 /tmp/spdk.key-sha256.oxO /tmp/spdk.key-sha384.70e /tmp/spdk.key-sha512.4UB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:38:37.277 14:06:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:40.570 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:00:04.5 (8086 2021): Already using the 
vfio-pci driver 00:38:40.570 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:38:40.570 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:40.570 00:38:40.570 real 0m56.594s 00:38:40.570 user 0m48.031s 00:38:40.570 sys 0m15.168s 00:38:40.570 14:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:40.570 14:06:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.570 ************************************ 00:38:40.570 END TEST nvmf_auth_host 00:38:40.570 ************************************ 00:38:40.570 14:06:33 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:38:40.570 14:06:33 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:40.570 14:06:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:40.570 14:06:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:40.570 14:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:40.570 ************************************ 00:38:40.570 START TEST nvmf_digest 00:38:40.570 ************************************ 00:38:40.570 14:06:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:38:40.830 * Looking for test storage... 
00:38:40.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:40.830 14:06:33 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:40.830 14:06:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.831 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:40.831 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:40.831 14:06:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:38:40.831 14:06:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:47.401 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:47.401 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:47.401 Found net devices under 0000:af:00.0: cvl_0_0 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:47.401 Found net devices under 0000:af:00.1: cvl_0_1 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:47.401 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:47.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:47.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:38:47.661 00:38:47.661 --- 10.0.0.2 ping statistics --- 00:38:47.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.661 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:47.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:38:47.661 00:38:47.661 --- 10.0.0.1 ping statistics --- 00:38:47.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.661 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:38:47.661 ************************************ 00:38:47.661 START TEST nvmf_digest_clean 00:38:47.661 ************************************ 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:38:47.661 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1660246 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1660246 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1660246 ']' 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.662 
14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:47.662 14:06:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:47.921 [2024-06-11 14:06:40.595355] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:38:47.922 [2024-06-11 14:06:40.595423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.922 EAL: No free 2048 kB hugepages reported on node 1 00:38:47.922 [2024-06-11 14:06:40.706442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.922 [2024-06-11 14:06:40.790900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.922 [2024-06-11 14:06:40.790940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.922 [2024-06-11 14:06:40.790953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.922 [2024-06-11 14:06:40.790965] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.922 [2024-06-11 14:06:40.790975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
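The trace above shows nvmf_tcp_init carving the two E810 ports into a loopback-style rig: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that setup, using the interface names and addresses from this run:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic on the IO port through.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on is consequently wrapped in "ip netns exec cvl_0_0_ns_spdk", including the nvmf_tgt launch itself.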
00:38:47.922 [2024-06-11 14:06:40.791015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:48.859 null0 00:38:48.859 [2024-06-11 14:06:41.634796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.859 [2024-06-11 14:06:41.659035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:38:48.859 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1660432 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1660432 /var/tmp/bperf.sock 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1660432 ']' 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:38:48.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:48.860 14:06:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:48.860 [2024-06-11 14:06:41.714724] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:38:48.860 [2024-06-11 14:06:41.714784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660432 ] 00:38:48.860 EAL: No free 2048 kB hugepages reported on node 1 00:38:49.119 [2024-06-11 14:06:41.807026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.119 [2024-06-11 14:06:41.894280] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:50.056 14:06:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:50.626 nvme0n1 00:38:50.626 14:06:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:50.626 14:06:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:50.626 Running I/O for 2 seconds... 
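Each run_bperf round above follows the same choreography: bdevperf starts suspended (--wait-for-rpc), the framework is released over the app's private RPC socket, and an NVMe-oF controller is attached over TCP with --ddgst, the flag that enables the NVMe/TCP data digest (CRC32C over PDU payloads) this test exists to exercise. Reduced to the RPC sequence from the trace, with the paths of this workspace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    # bdevperf itself was launched as:
    #   $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
    $SPDK/scripts/rpc.py -s $SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # The controller surfaces as bdev nvme0n1; this kicks off the timed run:
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests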
00:38:53.164 00:38:53.164 Latency(us) 00:38:53.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.164 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:53.164 nvme0n1 : 2.00 20183.11 78.84 0.00 0.00 6333.96 2949.12 17196.65 00:38:53.164 =================================================================================================================== 00:38:53.164 Total : 20183.11 78.84 0.00 0.00 6333.96 2949.12 17196.65 00:38:53.164 0 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:53.164 | select(.opcode=="crc32c") 00:38:53.164 | "\(.module_name) \(.executed)"' 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1660432 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1660432 ']' 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1660432 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1660432 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1660432' 00:38:53.164 killing process with pid 1660432 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1660432 00:38:53.164 Received shutdown signal, test time was about 2.000000 seconds 00:38:53.164 00:38:53.164 Latency(us) 00:38:53.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:53.164 =================================================================================================================== 00:38:53.164 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1660432 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:38:53.164 14:06:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1661085 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1661085 /var/tmp/bperf.sock 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1661085 ']' 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:53.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:53.164 14:06:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:53.164 [2024-06-11 14:06:46.010156] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:38:53.164 [2024-06-11 14:06:46.010222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661085 ] 00:38:53.164 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:53.164 Zero copy mechanism will not be used. 
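After each run the harness asks the bperf app's accel framework who actually computed the digests. The jq filter visible in the trace extracts the crc32c operation's module name and execution count, and with DSA scanning disabled (scan_dsa=false) the expected module is plain software. The check, reassembled from the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were actually computed...
    [[ $acc_module == software ]]     # ...and by the software module, as expected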
00:38:53.164 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.425 [2024-06-11 14:06:46.101779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.425 [2024-06-11 14:06:46.177919] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:54.040 14:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:54.040 14:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:38:54.040 14:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:54.040 14:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:54.040 14:06:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:54.611 14:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:54.611 14:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:54.870 nvme0n1 00:38:54.870 14:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:54.870 14:06:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:54.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:38:54.870 Zero copy mechanism will not be used. 00:38:54.870 Running I/O for 2 seconds... 
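The digest-clean phase is a small workload matrix rather than a single run: host/digest.sh@128 through @131 invoke the same run_bperf helper four times, varying direction, I/O size and queue depth. The two randread rounds appear above; the rest of this phase repeats the pattern for randwrite, equivalent to:

    # run_bperf <rw> <io_size_bytes> <queue_depth> <scan_dsa>
    run_bperf randread  4096   128 false   # 4 KiB reads,   QD 128
    run_bperf randread  131072 16  false   # 128 KiB reads, QD 16
    run_bperf randwrite 4096   128 false   # 4 KiB writes,  QD 128
    run_bperf randwrite 131072 16  false   # 128 KiB writes, QD 16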
00:38:57.407 00:38:57.407 Latency(us) 00:38:57.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.407 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:38:57.407 nvme0n1 : 2.00 3688.80 461.10 0.00 0.00 4333.85 1140.33 16986.93 00:38:57.407 =================================================================================================================== 00:38:57.407 Total : 3688.80 461.10 0.00 0.00 4333.85 1140.33 16986.93 00:38:57.407 0 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:38:57.407 | select(.opcode=="crc32c") 00:38:57.407 | "\(.module_name) \(.executed)"' 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:38:57.407 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1661085 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1661085 ']' 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1661085 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:57.408 14:06:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1661085 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1661085' 00:38:57.408 killing process with pid 1661085 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1661085 00:38:57.408 Received shutdown signal, test time was about 2.000000 seconds 00:38:57.408 00:38:57.408 Latency(us) 00:38:57.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:57.408 =================================================================================================================== 00:38:57.408 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1661085 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:38:57.408 14:06:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1661883 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1661883 /var/tmp/bperf.sock 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1661883 ']' 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:57.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:57.408 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:38:57.408 [2024-06-11 14:06:50.282330] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:38:57.408 [2024-06-11 14:06:50.282399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661883 ] 00:38:57.668 EAL: No free 2048 kB hugepages reported on node 1 00:38:57.668 [2024-06-11 14:06:50.375139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.668 [2024-06-11 14:06:50.451468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.668 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:57.668 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:38:57.668 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:38:57.668 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:38:57.668 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:57.928 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:57.928 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:38:58.187 nvme0n1 00:38:58.187 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:38:58.187 14:06:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:58.187 Running I/O for 2 seconds... 
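The bdevperf summary tables are easy to cross-check, since MiB/s is just IOPS times the I/O size: the first table above works out to 20183.11 x 4096 B = 78.84 MiB/s and the second to 3688.80 x 131072 B = 461.10 MiB/s, matching the printed columns. A quick awk verification:

    awk 'BEGIN {
        printf "%.2f MiB/s\n", 20183.11 * 4096   / 1048576   # randread 4 KiB,   QD 128
        printf "%.2f MiB/s\n", 3688.80  * 131072 / 1048576   # randread 128 KiB, QD 16
    }'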
00:39:00.723 00:39:00.723 Latency(us) 00:39:00.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.723 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:00.723 nvme0n1 : 2.01 20086.95 78.46 0.00 0.00 6358.18 5793.38 14575.21 00:39:00.723 =================================================================================================================== 00:39:00.723 Total : 20086.95 78.46 0.00 0.00 6358.18 5793.38 14575.21 00:39:00.723 0 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:00.723 | select(.opcode=="crc32c") 00:39:00.723 | "\(.module_name) \(.executed)"' 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1661883 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1661883 ']' 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1661883 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1661883 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1661883' 00:39:00.723 killing process with pid 1661883 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1661883 00:39:00.723 Received shutdown signal, test time was about 2.000000 seconds 00:39:00.723 00:39:00.723 Latency(us) 00:39:00.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.723 =================================================================================================================== 00:39:00.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1661883 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:39:00.723 14:06:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1662424 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1662424 /var/tmp/bperf.sock 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1662424 ']' 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:00.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:00.723 14:06:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:00.723 [2024-06-11 14:06:53.617657] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:00.723 [2024-06-11 14:06:53.617723] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662424 ] 00:39:00.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:00.723 Zero copy mechanism will not be used. 
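Each round ends by tearing its bperf instance down through autotest_common.sh's killprocess, whose guard rails are visible in the traces above: confirm the PID is still alive, look up the command name so a sudo wrapper is never signalled directly, then kill and reap so the exit status propagates. A simplified sketch of that helper (the sudo branch is abbreviated here):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                    # still running?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1       # sketch only: the real helper signals sudo's child instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap and surface the exit code
    }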
00:39:00.982 EAL: No free 2048 kB hugepages reported on node 1 00:39:00.982 [2024-06-11 14:06:53.709875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.982 [2024-06-11 14:06:53.789817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.550 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:01.550 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:39:01.550 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:01.550 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:01.550 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:01.810 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:01.810 14:06:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:02.377 nvme0n1 00:39:02.377 14:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:02.378 14:06:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:02.378 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:02.378 Zero copy mechanism will not be used. 00:39:02.378 Running I/O for 2 seconds... 
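The recurring "I/O size of 131072 is greater than zero copy threshold (65536)" notice is informational, not a failure: bdevperf prints it when the configured I/O size exceeds its large-buffer limit (64 KiB), and it simply disables its bdev zero-copy path for the 128 KiB rounds and falls back to buffered I/O. Attributing it to bdevperf rather than the TCP transport is an inference from the message text; assuming the usual source layout of this workspace, it can be confirmed with:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Locate the notice in the bdevperf source to see the exact guard and constant.
    grep -n "zero copy threshold" $SPDK/examples/bdev/bdevperf/bdevperf.c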
00:39:04.284 00:39:04.284 Latency(us) 00:39:04.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.284 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:04.284 nvme0n1 : 2.00 4077.00 509.63 0.00 0.00 3916.44 2922.91 15518.92 00:39:04.284 =================================================================================================================== 00:39:04.284 Total : 4077.00 509.63 0.00 0.00 3916.44 2922.91 15518.92 00:39:04.284 0 00:39:04.284 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:04.284 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:04.284 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:04.284 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:04.284 | select(.opcode=="crc32c") 00:39:04.284 | "\(.module_name) \(.executed)"' 00:39:04.284 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1662424 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1662424 ']' 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1662424 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:04.542 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1662424 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1662424' 00:39:04.801 killing process with pid 1662424 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1662424 00:39:04.801 Received shutdown signal, test time was about 2.000000 seconds 00:39:04.801 00:39:04.801 Latency(us) 00:39:04.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.801 =================================================================================================================== 00:39:04.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1662424 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1660246 00:39:04.801 14:06:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1660246 ']' 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1660246 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:04.801 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1660246 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1660246' 00:39:05.060 killing process with pid 1660246 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1660246 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1660246 00:39:05.060 00:39:05.060 real 0m17.395s 00:39:05.060 user 0m33.446s 00:39:05.060 sys 0m5.078s 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:05.060 14:06:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:05.060 ************************************ 00:39:05.060 END TEST nvmf_digest_clean 00:39:05.060 ************************************ 00:39:05.319 14:06:57 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:39:05.319 14:06:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:05.319 14:06:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:05.319 14:06:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:05.319 ************************************ 00:39:05.319 START TEST nvmf_digest_error 00:39:05.319 ************************************ 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1663100 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1663100 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1663100 ']' 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:05.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:05.319 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:05.319 [2024-06-11 14:06:58.074914] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:05.319 [2024-06-11 14:06:58.074973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:05.319 EAL: No free 2048 kB hugepages reported on node 1 00:39:05.319 [2024-06-11 14:06:58.183000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.578 [2024-06-11 14:06:58.268347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:05.578 [2024-06-11 14:06:58.268388] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:05.578 [2024-06-11 14:06:58.268400] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:05.578 [2024-06-11 14:06:58.268412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:05.578 [2024-06-11 14:06:58.268422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
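The nvmf_digest_error phase starting here flips the harness from verification to fault injection. On the target, rpc_cmd (the harness wrapper for rpc.py against the target app inside the namespace) reroutes the crc32c opcode to the accel "error" module and later arms it to corrupt digests; on the initiator, bdevperf is attached with --ddgst as before, but with NVMe error statistics enabled and infinite bdev retries, so the injected digest failures surface as the COMMAND TRANSIENT TRANSPORT ERROR completions filling the rest of this log instead of aborting the run. The sequence, as it appears in the trace below:

    # Target: route crc32c through the error-injection accel module.
    rpc_cmd accel_assign_opc -o crc32c -m error
    # Initiator (bperf socket): count NVMe errors, retry failed I/O forever.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean baseline: injection disabled...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # ...attach with data digest enabled, then corrupt the next 256 digests.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256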
00:39:05.578 [2024-06-11 14:06:58.268455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.146 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:06.146 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:39:06.146 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:06.146 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:06.146 14:06:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:06.146 [2024-06-11 14:06:59.026756] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:06.146 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:06.405 null0 00:39:06.405 [2024-06-11 14:06:59.123918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:06.405 [2024-06-11 14:06:59.148140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:06.405 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:06.405 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:39:06.405 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1663287 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1663287 /var/tmp/bperf.sock 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1663287 ']' 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:06.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:06.406 14:06:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:06.406 [2024-06-11 14:06:59.200418] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:06.406 [2024-06-11 14:06:59.200482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663287 ] 00:39:06.406 EAL: No free 2048 kB hugepages reported on node 1 00:39:06.406 [2024-06-11 14:06:59.292623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.664 [2024-06-11 14:06:59.379125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:07.232 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:07.232 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:39:07.232 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:07.232 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:07.491 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:07.491 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:07.491 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:07.491 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:07.491 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:07.491 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:07.749 nvme0n1 00:39:08.009 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:39:08.009 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.009 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:08.009 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.009 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:08.009 14:07:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:08.009 Running I/O for 2 seconds... 00:39:08.009 [2024-06-11 14:07:00.799013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.799056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.799074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.813470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.813508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.813524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.824904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.824934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.824950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.838061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.838090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.838105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.851379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.851408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.851423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.862243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.862271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.862291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.876358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.876388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5956 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.876402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.887930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.887958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.887973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.901910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.901936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.901950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.009 [2024-06-11 14:07:00.914515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.009 [2024-06-11 14:07:00.914542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.009 [2024-06-11 14:07:00.914557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:00.927452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:00.927486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:00.927501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:00.938222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:00.938250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:00.938265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:00.953001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:00.953030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:00.953045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:00.965986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:00.966014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:97 nsid:1 lba:3571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:00.966029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:00.977634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:00.977666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:00.977680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:00.990071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:00.990100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:00.990115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.002367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.002394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.002409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.015489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.015516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.015530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.028353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.028380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.028395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.040378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.040406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.040420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.053313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.053342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.053357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.066677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.066704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.066719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.078946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.078975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.078989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.092096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.092124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.092139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.104175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.104204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.104218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.117214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.117242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.117257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.130938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.269 [2024-06-11 14:07:01.130966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.269 [2024-06-11 14:07:01.130981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.269 [2024-06-11 14:07:01.141932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.270 
[2024-06-11 14:07:01.141960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.270 [2024-06-11 14:07:01.141974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.270 [2024-06-11 14:07:01.154928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.270 [2024-06-11 14:07:01.154956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.270 [2024-06-11 14:07:01.154971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.270 [2024-06-11 14:07:01.167689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.270 [2024-06-11 14:07:01.167718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.270 [2024-06-11 14:07:01.167733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.529 [2024-06-11 14:07:01.180786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.529 [2024-06-11 14:07:01.180814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.529 [2024-06-11 14:07:01.180829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.529 [2024-06-11 14:07:01.192430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.529 [2024-06-11 14:07:01.192457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.529 [2024-06-11 14:07:01.192485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.529 [2024-06-11 14:07:01.205503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.529 [2024-06-11 14:07:01.205531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.529 [2024-06-11 14:07:01.205545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.529 [2024-06-11 14:07:01.219313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.529 [2024-06-11 14:07:01.219342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.529 [2024-06-11 14:07:01.219356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.529 [2024-06-11 14:07:01.229995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xaa27b0) 00:39:08.529 [2024-06-11 14:07:01.230024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.529 [2024-06-11 14:07:01.230038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.243816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.243844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.243859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.256206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.256233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.256248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.269818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.269846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.269860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.283095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.283123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.283138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.294783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.294811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.294825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.308328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.308356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.308372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.320283] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.320311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.320326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.333527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.333555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.333570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.344990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.345019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.345033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.358546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.358575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.358589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.370066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.370093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.370108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.383854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.383882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.383897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.394427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.394455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.394470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
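(What this stretch of the log shows: bdevperf is being driven over its RPC socket. The examples/bdev/bdevperf/bdevperf.py helper connects to the Unix socket given with -s, here /var/tmp/bperf.sock, and issues perform_tests, which kicks off the I/O workload the bdevperf application was configured with at launch; typically bdevperf is started so that it waits for this RPC before running, though the launch line is outside this excerpt, so that detail is an assumption. "Running I/O for 2 seconds..." is bdevperf acknowledging the run, and everything after it is per-I/O error reporting from the NVMe/TCP initiator.)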
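(Each repeated triplet of entries is one failed READ. nvme_tcp.c:1459 — the completion callback of the accel-sequence CRC32C computation — reports that the data digest carried in a received data PDU did not match the digest computed over the payload, always on the same qpair (tqpair=0xaa27b0). nvme_qpair.c then prints the failed command (sqid/cid/nsid/lba) and its completion status. The "(00/22)" decodes as status code type 0x0, generic command status, status code 0x22, Transient Transport Error; dnr:0 means the do-not-retry bit is clear, so the initiator is permitted to retry. The flood of near-identical entries is expected here: presumably the target is deliberately injecting digest errors for this error-path test — the setup is outside this excerpt — so every read fails for the whole 2-second run.

For reference, the digest being checked is plain CRC32C, the Castagnoli polynomial, which NVMe/TCP specifies for both its optional header digest (HDGST) and data digest (DDGST). Below is a minimal bitwise C sketch of that checksum; it is illustrative only, not SPDK's implementation, which uses table-driven or hardware-accelerated code via its accel framework:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /*
     * Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78),
     * the checksum NVMe/TCP uses for HDGST and DDGST. Initial value
     * and final XOR are both 0xFFFFFFFF.
     */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                /* (0 - (crc & 1)) is all-ones when the LSB is set. */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Standard check value: CRC32C("123456789") == 0xE3069283. */
        const uint8_t msg[] = "123456789";

        printf("0x%08X\n", (unsigned)crc32c(msg, sizeof(msg) - 1));
        return 0;
    }

Compiled and run, this prints 0xE3069283, the standard CRC32C check value for the ASCII string "123456789". A receiver that computes a different value over the data PDU payload than the DDGST field it received fails the request exactly as logged above.)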
00:39:08.530 [2024-06-11 14:07:01.408014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.408042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.408061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.420956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.420984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.420999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.530 [2024-06-11 14:07:01.433949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.530 [2024-06-11 14:07:01.433977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.530 [2024-06-11 14:07:01.433992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.446666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.446695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.446710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.460245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.460273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.460287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.472164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.472191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.472207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.484041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.484069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.484083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.497177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.497207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.497221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.509773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.509801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.509816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.521812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.521845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.521860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.534049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.534076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.534091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.547132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.547160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.547175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.560671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.560699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.560713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.571518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.571546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.571561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.585959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.585987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.586002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.598647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.598676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.598691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.610781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.610808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.610824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.624529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.624556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.624571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.635574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.635602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.635616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.649113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.649141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.649155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.660729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.660757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 
14:07:01.660771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.673423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.673450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.673465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:08.790 [2024-06-11 14:07:01.686740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:08.790 [2024-06-11 14:07:01.686767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:08.790 [2024-06-11 14:07:01.686782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.050 [2024-06-11 14:07:01.699078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.050 [2024-06-11 14:07:01.699105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.050 [2024-06-11 14:07:01.699119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.050 [2024-06-11 14:07:01.711522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.050 [2024-06-11 14:07:01.711550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.050 [2024-06-11 14:07:01.711564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.050 [2024-06-11 14:07:01.724458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.050 [2024-06-11 14:07:01.724493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.050 [2024-06-11 14:07:01.724508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.050 [2024-06-11 14:07:01.737630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.050 [2024-06-11 14:07:01.737656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.050 [2024-06-11 14:07:01.737675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.050 [2024-06-11 14:07:01.749154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.050 [2024-06-11 14:07:01.749181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16591 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:09.050 [2024-06-11 14:07:01.749196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.050 [2024-06-11 14:07:01.761805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.050 [2024-06-11 14:07:01.761833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.761848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.774760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.774787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.774802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.788957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.788984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.788999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.800207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.800234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.800248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.813788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.813816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.813830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.826708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.826735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.826750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.840045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.840072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 
nsid:1 lba:5051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.840087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.851585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.851617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.851631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.865429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.865456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.865471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.877405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.877431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.877446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.889422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.889449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.889464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.903364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.903391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.903405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.917086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.917114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.917128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.929674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.929701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.929715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.943629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.943659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.943674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.051 [2024-06-11 14:07:01.955054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.051 [2024-06-11 14:07:01.955084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.051 [2024-06-11 14:07:01.955099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:01.969411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:01.969440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:01.969455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:01.982140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:01.982168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:01.982183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:01.993623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:01.993651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:01.993666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.007099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.007128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.007143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.019358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 
[2024-06-11 14:07:02.019386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.019402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.031973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.032001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.032016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.045192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.045220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.045235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.058984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.059011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.059025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.069821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.069848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.069867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.084698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.084725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.084740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.311 [2024-06-11 14:07:02.097465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.311 [2024-06-11 14:07:02.097500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.311 [2024-06-11 14:07:02.097515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.110266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.110294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.110309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.122963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.122991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.123006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.135672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.135699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.135714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.147756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.147783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.147798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.161119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.161147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.161162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.173378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.173405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.173420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.186398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0) 00:39:09.312 [2024-06-11 14:07:02.186426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:09.312 [2024-06-11 14:07:02.186441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:09.312 [2024-06-11 14:07:02.199945] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0)
00:39:09.312 [2024-06-11 14:07:02.199972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:09.312 [2024-06-11 14:07:02.199987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[45 similar triplets elided, timestamps 14:07:02.211647 through 14:07:02.772348: each injected crc32c failure logs the data digest *ERROR* line on tqpair=(0xaa27b0), the offending READ (qid:1, varying cid and lba, len:1), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion]
00:39:10.092 [2024-06-11 14:07:02.785273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa27b0)
00:39:10.092 [2024-06-11 14:07:02.785300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:10.092 [2024-06-11 14:07:02.785315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:10.092
00:39:10.092 Latency(us)
00:39:10.092 Device Information : runtime(s)      IOPS    MiB/s   Fail/s    TO/s   Average       min       max
00:39:10.092 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:39:10.092 nvme0n1            :       2.00  20029.16    78.24     0.00    0.00   6382.63   2949.12  17301.50
00:39:10.092 ===================================================================================================================
00:39:10.092 Total              :             20029.16    78.24     0.00    0.00   6382.63   2949.12  17301.50
00:39:10.092 0
00:39:10.092 14:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:10.092 14:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
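Note: get_transient_errcount, called above and expanded in the trace that follows, reads the transient-transport-error counter that the bdev layer maintains once NVMe error statistics are enabled (bdev_nvme_set_options --nvme-error-stat, traced later for the second pass). Collapsed to a single pipeline, the expansion is equivalent to the sketch below; socket path, bdev name, and jq filter are copied verbatim from the trace:

    # prints the transient transport error count for nvme0n1, e.g. 157 here
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'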
14:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
14:07:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:10.092 | .driver_specific
00:39:10.092 | .nvme_error
00:39:10.092 | .status_code
00:39:10.092 | .command_transient_transport_error'
00:39:10.352 14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1663287
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1663287 ']'
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1663287
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1663287
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1663287'
killing process with pid 1663287
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1663287
Received shutdown signal, test time was about 2.000000 seconds
00:39:10.352
00:39:10.352 Latency(us)
00:39:10.352 Device Information : runtime(s)      IOPS    MiB/s   Fail/s    TO/s   Average       min       max
00:39:10.352 ===================================================================================================================
00:39:10.352 Total              :                 0.00     0.00     0.00    0.00      0.00      0.00      0.00
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1663287
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1664079
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1664079 /var/tmp/bperf.sock
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1664079 ']'
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
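Note: throughout this trace, bperf_rpc and bperf_py aim the stock SPDK tooling at the bdevperf instance just launched (RPC socket /var/tmp/bperf.sock) rather than at the nvmf target. Reconstructed from their digest.sh@18 and digest.sh@19 expansions in this trace, the two wrappers amount to the sketch below; the function bodies are inferred, not quoted from digest.sh, and $rootdir stands for the checkout at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:

    # stock rpc.py against bdevperf's UNIX-domain RPC socket (expansion of digest.sh@18)
    bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    # bdevperf's control script, used below as `bperf_py perform_tests` (digest.sh@19)
    bperf_py() { "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"; }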
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:10.612 [2024-06-11 14:07:03.364619] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:39:10.612 [2024-06-11 14:07:03.364669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664079 ]
00:39:10.612 I/O size of 131072 is greater than zero copy threshold (65536).
00:39:10.612 Zero copy mechanism will not be used.
00:39:10.612 EAL: No free 2048 kB hugepages reported on node 1
00:39:10.612 [2024-06-11 14:07:03.445702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:10.871 [2024-06-11 14:07:03.521448] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:39:10.871 14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:11.130 14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:07:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:11.390 nvme0n1
00:39:11.390 14:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
14:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
14:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
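Note: the RPCs just traced arm the failure mode for this pass: bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 enables per-bdev NVMe error counters and retries failed I/O indefinitely, bdev_nvme_attach_controller --ddgst attaches nvme0 with the NVMe/TCP data digest enabled, and accel_error_inject_error -o crc32c -t corrupt -i 32 arms crc32c corruption in the accel layer (the trace does not show what -i 32 controls, and rpc_cmd here presumably addresses the default application socket rather than bperf.sock). Run by hand under those assumptions, the sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # initiator side: error counters on, never give up on retries
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach with data digest so corrupted payload digests are caught on READ completion
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm crc32c corruption in the accel layer (assumed: default socket, as rpc_cmd above)
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

perform_tests, dispatched below, then drives 2 seconds of 128 KiB random reads at queue depth 16 through that path; each corrupted digest surfaces as the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow.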
00:39:11.390 14:07:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:11.390 I/O size of 131072 is greater than zero copy threshold (65536).
00:39:11.390 Zero copy mechanism will not be used.
00:39:11.390 Running I/O for 2 seconds...
00:39:11.390 [2024-06-11 14:07:04.213786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0)
00:39:11.390 [2024-06-11 14:07:04.213829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:11.390 [2024-06-11 14:07:04.213847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[~80 similar triplets elided, timestamps 14:07:04.226081 through 14:07:04.898193: 128 KiB READs (len:32) on qid:1, mostly cid:15, each failing the injected crc32c check on tqpair=(0x1365fe0) and completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
00:39:12.177 [2024-06-11 14:07:04.907912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0)
00:39:12.177 [2024-06-11 14:07:04.907943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:12.177 [2024-06-11 14:07:04.907958] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.919120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.919151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.919167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.930290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.930321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.930336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.940338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.940368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.940383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.949520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.949549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.949564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.959150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.959180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.959196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.968229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.968258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.968273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.976948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.976976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 
[2024-06-11 14:07:04.976991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.986520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.986549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.986564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:04.995095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.177 [2024-06-11 14:07:04.995124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.177 [2024-06-11 14:07:04.995143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.177 [2024-06-11 14:07:05.004239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.004270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.004286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.013393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.013421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.013437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.021632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.021660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.021675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.029492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.029520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.029535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.037216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.037245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.037260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.044767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.044796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.044811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.052198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.052228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.052243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.059734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.059763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.059778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.067226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.067259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.067274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.074695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.074723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.074738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.178 [2024-06-11 14:07:05.082209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.178 [2024-06-11 14:07:05.082236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.178 [2024-06-11 14:07:05.082251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.089760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.089789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.089803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.097349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.097377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.097392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.104832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.104861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.104875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.112303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.112332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.112346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.119747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.119775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.119789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.127241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.127269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.127284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.134797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.134825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.134840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.142159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.142187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.142201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.149566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.149594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.149608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.157020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.157048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.157062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.164418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.164446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.164461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.171868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.171896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.171910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.179298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.179327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.179341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.186796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.186824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.186838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.194329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 
[2024-06-11 14:07:05.194356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.194374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.201773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.483 [2024-06-11 14:07:05.201801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.483 [2024-06-11 14:07:05.201815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.483 [2024-06-11 14:07:05.209222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.209250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.209264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.216722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.216749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.216763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.224182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.224210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.224224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.231580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.231608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.231622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.238979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.239007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.239021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.246445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.246473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.246494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.253848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.253876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.253890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.261266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.261299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.261313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.268817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.268844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.268859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.276256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.276283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.276297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.283770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.283798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.283812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.291174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.291202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.291217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.298652] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.298680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.298694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.306079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.306107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.306121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.313524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.313552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.313566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.321078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.321105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.321120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.328474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.328507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.328521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.335846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.335874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.335889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.343229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.343258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.343273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
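
The entries above and below all repeat one failure pattern: nvme_tcp_accel_seq_recv_compute_crc32_done detects a data digest mismatch on the receive path, and each affected READ is completed with TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 (retryable) rather than a media error, because it is the transported payload, not the drive, that is suspect. NVMe/TCP's data digest is a CRC32C (reflected polynomial 0x82F63B78) computed over the DATA PDU payload. The following is a minimal, illustrative sketch of that check, assuming a plain software CRC32C; it is not SPDK's actual implementation, and verify_data_digest is a hypothetical helper name.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 --
     * the digest NVMe/TCP appends to DATA PDUs when data digest is enabled. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return ~crc;
    }

    /* Hypothetical receive-side check: 0 on match, -1 on digest error.
     * On a mismatch the host must not trust the payload and fails the
     * command with a transport-level status; SPDK logs this as
     * COMMAND TRANSIENT TRANSPORT ERROR (00/22). */
    static int verify_data_digest(const void *payload, size_t len,
                                  uint32_t recv_digest)
    {
        return crc32c(payload, len) == recv_digest ? 0 : -1;
    }

    int main(void)
    {
        uint8_t data[512] = {0};
        uint32_t good = crc32c(data, sizeof(data));

        printf("intact:  %d\n", verify_data_digest(data, sizeof(data), good));      /* 0 */
        printf("corrupt: %d\n", verify_data_digest(data, sizeof(data), good ^ 1u)); /* -1 */
        return 0;
    }

Each "data digest error" line in this log corresponds to one such verification failing for a controller-to-host data transfer on qpair 0x1365fe0, which is the behavior this test exercises by injecting corrupted digests.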
00:39:12.484 [2024-06-11 14:07:05.350662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.350690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.350704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.484 [2024-06-11 14:07:05.358236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.484 [2024-06-11 14:07:05.358264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.484 [2024-06-11 14:07:05.358279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.365760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.365788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.365802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.373268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.373297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.373312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.380846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.380873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.380887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.388312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.388340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.388358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.395760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.395788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.395804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.403260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.403288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.403303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.410691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.410719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.410734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.418837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.418866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.418881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.428550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.428579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.428594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.438256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.438286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.438300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.447024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.447053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.447067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.456578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.456607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.456621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.465409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.465438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.465453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.474450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.474485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.474500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.483157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.483186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.483201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.491793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.491823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.491838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.500780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.500810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.500825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.508908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.508938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.508953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.517558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.745 [2024-06-11 14:07:05.517587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.745 [2024-06-11 14:07:05.517602] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.745 [2024-06-11 14:07:05.525494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.525523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.525537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.533676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.533705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.533723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.541615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.541643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.541657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.549498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.549525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.549540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.557207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.557235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.557249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.564622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.564649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.564664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.572546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.572574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:12.746 [2024-06-11 14:07:05.572588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.584903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.584931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.584945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.595524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.595554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.595568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.606142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.606171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.606185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.615892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.615924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.615939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.625775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.625805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.625820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.636279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.636307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.636322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:12.746 [2024-06-11 14:07:05.647598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:12.746 [2024-06-11 14:07:05.647627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:12.746 [2024-06-11 14:07:05.647642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.660491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.660520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.660535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.672642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.672671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.672687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.683064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.683093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.683107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.693046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.693076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.693092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.701527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.701555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.701569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.710065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.710094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.710108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.718044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.718072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.718087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.725816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.725845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.725860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.738340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.738370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.738384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.749558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.749589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.749604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.759417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.759445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.759460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.768671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.768700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.006 [2024-06-11 14:07:05.768715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.006 [2024-06-11 14:07:05.777404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.006 [2024-06-11 14:07:05.777432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.777447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.785254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.785283] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.793318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.793346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.793360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.803024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.803052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.803067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.814458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.814494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.814509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.824244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.824272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.824286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.833129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.833157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.833172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.843759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.843788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.843802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.854562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.854590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.854604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.864242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.864271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.864285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.873497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.873529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.873544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.884749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.884777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.884792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.896383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.896412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.896426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.007 [2024-06-11 14:07:05.908152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.007 [2024-06-11 14:07:05.908182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.007 [2024-06-11 14:07:05.908196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.268 [2024-06-11 14:07:05.921492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.268 [2024-06-11 14:07:05.921522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.268 [2024-06-11 14:07:05.921536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.268 [2024-06-11 14:07:05.933094] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.268 [2024-06-11 14:07:05.933123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.268 [2024-06-11 14:07:05.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:05.946071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:05.946101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:05.946117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:05.959533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:05.959563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:05.959578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:05.972755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:05.972784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:05.972799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:05.983218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:05.983250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:05.983265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:05.993512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:05.993541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:05.993556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.003665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.003694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.003709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:39:13.269 [2024-06-11 14:07:06.012908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.012939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.012954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.022060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.022091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.022107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.031500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.031530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.031545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.041217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.041247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.041263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.051239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.051270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.051285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.061330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.061361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.061381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.070013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.070043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.070058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.078292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.078322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.078337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.086456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.086493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.086507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.094551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.094579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.094594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.102027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.102056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.102071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.109491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.109519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.109534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.117003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.117032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.117046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.124500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.124528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.124543] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.132156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.132185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.132199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.139640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.139668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.139683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.147190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.147219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.147234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.154730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.154759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.162242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.162271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.162286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:13.269 [2024-06-11 14:07:06.169692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.269 [2024-06-11 14:07:06.169721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.269 [2024-06-11 14:07:06.169735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:13.530 [2024-06-11 14:07:06.177246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0) 00:39:13.530 [2024-06-11 14:07:06.177275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:13.530 [2024-06-11 14:07:06.177289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:39:13.530 [2024-06-11 14:07:06.184804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0)
00:39:13.530 [2024-06-11 14:07:06.184834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:13.530 [2024-06-11 14:07:06.184848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:13.530 [2024-06-11 14:07:06.192285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0)
00:39:13.530 [2024-06-11 14:07:06.192314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:13.530 [2024-06-11 14:07:06.192333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:39:13.530 [2024-06-11 14:07:06.199769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1365fe0)
00:39:13.530 [2024-06-11 14:07:06.199796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:13.530 [2024-06-11 14:07:06.199810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:39:13.530
00:39:13.530 Latency(us)
00:39:13.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:13.530 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:39:13.530 nvme0n1 : 2.00 3573.64 446.71 0.00 0.00 4473.73 1284.51 14365.49
00:39:13.530 ===================================================================================================================
00:39:13.530 Total : 3573.64 446.71 0.00 0.00 4473.73 1284.51 14365.49
00:39:13.530 0
00:39:13.530 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:13.530 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:13.530 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:13.530 | .driver_specific
00:39:13.530 | .nvme_error
00:39:13.530 | .status_code
00:39:13.530 | .command_transient_transport_error'
00:39:13.530 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 230 > 0 ))
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1664079
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1664079 ']'
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1664079
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1664079
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1664079'
00:39:13.790 killing process with pid 1664079
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1664079
00:39:13.790 Received shutdown signal, test time was about 2.000000 seconds
00:39:13.790
00:39:13.790 Latency(us)
00:39:13.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:13.790 ===================================================================================================================
00:39:13.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:13.790 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1664079
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1664629
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1664629 /var/tmp/bperf.sock
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1664629 ']'
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:39:14.050 14:07:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:14.050 [2024-06-11 14:07:06.779099] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:39:14.050 [2024-06-11 14:07:06.779164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664629 ]
00:39:14.050 EAL: No free 2048 kB hugepages reported on node 1
00:39:14.050 [2024-06-11 14:07:06.871039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:14.050 [2024-06-11 14:07:06.956663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:39:14.988 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:39:14.988 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:39:14.988 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:14.988 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:15.246 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:39:15.246 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:39:15.246 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:15.246 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:39:15.246 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:15.246 14:07:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:15.506 nvme0n1
00:39:15.506 14:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:39:15.506 14:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:39:15.506 14:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:15.506 14:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:39:15.506 14:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:39:15.506 14:07:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:15.506 Running I/O for 2 seconds...
00:39:15.506 [2024-06-11 14:07:08.363144] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f7da8 00:39:15.506 [2024-06-11 14:07:08.364484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.506 [2024-06-11 14:07:08.364522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:15.506 [2024-06-11 14:07:08.376069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fac10 00:39:15.506 [2024-06-11 14:07:08.377553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.506 [2024-06-11 14:07:08.377583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:39:15.506 [2024-06-11 14:07:08.387513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190eaab8 00:39:15.506 [2024-06-11 14:07:08.388425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.506 [2024-06-11 14:07:08.388452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.506 [2024-06-11 14:07:08.399516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e99d8 00:39:15.506 [2024-06-11 14:07:08.400553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.506 [2024-06-11 14:07:08.400580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.506 [2024-06-11 14:07:08.411882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e88f8 00:39:15.506 [2024-06-11 14:07:08.412897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.506 [2024-06-11 14:07:08.412924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.765 [2024-06-11 14:07:08.424057] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e7818 00:39:15.765 [2024-06-11 14:07:08.424982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.765 [2024-06-11 14:07:08.425009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.765 [2024-06-11 14:07:08.436223] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e6738 00:39:15.765 [2024-06-11 14:07:08.437137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.765 [2024-06-11 14:07:08.437163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:39:15.765 [2024-06-11 14:07:08.448384] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f81e0 00:39:15.765 [2024-06-11 14:07:08.449324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.765 [2024-06-11 14:07:08.449351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.765 [2024-06-11 14:07:08.460544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f92c0 00:39:15.765 [2024-06-11 14:07:08.461469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.461505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.472721] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fa3a0 00:39:15.766 [2024-06-11 14:07:08.473741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.473767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.484873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fb480 00:39:15.766 [2024-06-11 14:07:08.485892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.485917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.497002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fc560 00:39:15.766 [2024-06-11 14:07:08.498018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.498044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.509139] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190ff3c8 00:39:15.766 [2024-06-11 14:07:08.510129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.510154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.521265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fe2e8 00:39:15.766 [2024-06-11 14:07:08.522282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.522307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.533382] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190de470 00:39:15.766 [2024-06-11 14:07:08.534378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.534403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.545504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e2c28 00:39:15.766 [2024-06-11 14:07:08.546493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.546519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.557633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e3d08 00:39:15.766 [2024-06-11 14:07:08.558637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.558663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.569754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e4de8 00:39:15.766 [2024-06-11 14:07:08.570757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.570783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.581885] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e5ec8 00:39:15.766 [2024-06-11 14:07:08.582876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.582901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.594008] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e9e10 00:39:15.766 [2024-06-11 14:07:08.595004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.595030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.606116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e8d30 00:39:15.766 [2024-06-11 14:07:08.607110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.607135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.618226] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e7c50 00:39:15.766 [2024-06-11 14:07:08.619272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.619298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.630356] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e6b70 00:39:15.766 [2024-06-11 14:07:08.631357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.631383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.642475] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f7da8 00:39:15.766 [2024-06-11 14:07:08.643465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.643494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.654589] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f8e88 00:39:15.766 [2024-06-11 14:07:08.655578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.655603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:15.766 [2024-06-11 14:07:08.666705] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f9f68 00:39:15.766 [2024-06-11 14:07:08.667699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:15.766 [2024-06-11 14:07:08.667725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.026 [2024-06-11 14:07:08.678819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fb048 00:39:16.026 [2024-06-11 14:07:08.679810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.026 [2024-06-11 14:07:08.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.026 [2024-06-11 14:07:08.690933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fc128 00:39:16.026 [2024-06-11 14:07:08.691926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.026 [2024-06-11 14:07:08.691951] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.026 [2024-06-11 14:07:08.703050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fdeb0 00:39:16.026 [2024-06-11 14:07:08.704065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.026 [2024-06-11 14:07:08.704090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.026 [2024-06-11 14:07:08.715172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fe720 00:39:16.026 [2024-06-11 14:07:08.716163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.716188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.727280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190de038 00:39:16.027 [2024-06-11 14:07:08.728296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.728321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.739401] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190df118 00:39:16.027 [2024-06-11 14:07:08.740394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.740420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.751516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e3060 00:39:16.027 [2024-06-11 14:07:08.752524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.752549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.763627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e4140 00:39:16.027 [2024-06-11 14:07:08.764634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.764659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.775743] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e5220 00:39:16.027 [2024-06-11 14:07:08.776733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.776762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.787873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190eaab8 00:39:16.027 [2024-06-11 14:07:08.788866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.788892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.799990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e99d8 00:39:16.027 [2024-06-11 14:07:08.800985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.801010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.812115] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e88f8 00:39:16.027 [2024-06-11 14:07:08.813132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.813158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.824240] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e7818 00:39:16.027 [2024-06-11 14:07:08.825240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.825266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.836361] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190e6738 00:39:16.027 [2024-06-11 14:07:08.837377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.837402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.848485] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f81e0 00:39:16.027 [2024-06-11 14:07:08.849472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 14:07:08.849501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:16.027 [2024-06-11 14:07:08.860598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f92c0 00:39:16.027 [2024-06-11 14:07:08.861586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:16.027 [2024-06-11 
14:07:08.861611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:39:16.027 [2024-06-11 14:07:08.872707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fa3a0
00:39:16.027 [2024-06-11 14:07:08.873721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:39:16.027 [2024-06-11 14:07:08.873745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:39:16.027 [2024-06-11 14:07:08.884826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190fb480
00:39:16.027 [2024-06-11 14:07:08.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:39:16.027 [2024-06-11 14:07:08.885871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0
[... the same three-line pattern (a tcp.c:2062 data_crc32_calc_done *ERROR* for the bad data digest, the nvme_qpair.c WRITE command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every WRITE completed on tqpair 0x1f98550 from 14:07:08.896 through 14:07:10.346, with only the pdu address, cid, lba, and sqhd fields varying; the repeated records are elided, and the resulting transient-error count for the run (165) is read back below ...]
00:39:17.589 [2024-06-11 14:07:10.357151] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f98550) with pdu=0x2000190f81e0
00:39:17.589 [2024-06-11 14:07:10.358295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:39:17.589 [2024-06-11 14:07:10.358320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:39:17.589
00:39:17.589 Latency(us)
00:39:17.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:17.589 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:39:17.589 nvme0n1 : 2.00 20999.09 82.03 0.00 0.00 6085.41 2451.05 11953.77
00:39:17.589 ===================================================================================================================
00:39:17.589 Total : 20999.09 82.03 0.00 0.00 6085.41 2451.05 11953.77
00:39:17.589 0
00:39:17.589 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:17.589 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:17.589 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:17.589 | .driver_specific
00:39:17.589 | .nvme_error
00:39:17.589 | .status_code
00:39:17.589 | .command_transient_transport_error'
00:39:17.589 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 ))
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1664629
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1664629 ']'
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1664629
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1664629
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1664629'
00:39:17.847 killing process with pid 1664629
00:39:17.847 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1664629
00:39:17.848 Received shutdown signal, test time was about 2.000000 seconds
00:39:17.848
00:39:17.848 Latency(us)
00:39:17.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:17.848 ===================================================================================================================
00:39:17.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:17.848 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1664629
00:39:17.848 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1665184
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1665184 /var/tmp/bperf.sock
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1665184 ']'
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
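The (( 165 > 0 )) check above is the pass/fail gate for the run that just finished: get_transient_errcount pulls the NVMe error counters that bdevperf accumulated (collection was enabled with bdev_nvme_set_options --nvme-error-stat) and requires at least one COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal standalone sketch of the same query, assuming the socket path, bdev name, and jq filter used by this run:

  # Sketch only; the RPC call and jq filter are copied from the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this job's checkout
  errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run passes only if at least one transient transport error was counted.
  (( errcount > 0 ))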
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:18.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:18.107 14:07:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:18.107 [2024-06-11 14:07:10.915415] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:18.107 [2024-06-11 14:07:10.915489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665184 ] 00:39:18.107 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:18.107 Zero copy mechanism will not be used. 00:39:18.107 EAL: No free 2048 kB hugepages reported on node 1 00:39:18.107 [2024-06-11 14:07:11.007778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.399 [2024-06-11 14:07:11.093956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:18.968 14:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:18.968 14:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:39:18.968 14:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:18.968 14:07:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:19.227 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:19.227 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:19.227 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:19.227 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:19.227 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:19.227 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:19.795 nvme0n1 00:39:19.795 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:19.795 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:19.795 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:19.795 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:19.795 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:19.795 14:07:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:19.795 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:19.795 Zero copy mechanism will not be used. 00:39:19.795 Running I/O for 2 seconds... 00:39:19.795 [2024-06-11 14:07:12.560783] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.795 [2024-06-11 14:07:12.561257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.795 [2024-06-11 14:07:12.561293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.795 [2024-06-11 14:07:12.572164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.795 [2024-06-11 14:07:12.572614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.795 [2024-06-11 14:07:12.572646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.795 [2024-06-11 14:07:12.580966] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.795 [2024-06-11 14:07:12.581398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.795 [2024-06-11 14:07:12.581426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.795 [2024-06-11 14:07:12.589304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.795 [2024-06-11 14:07:12.589712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.795 [2024-06-11 14:07:12.589741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.795 [2024-06-11 14:07:12.598278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.795 [2024-06-11 14:07:12.598726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.795 [2024-06-11 14:07:12.598754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.795 [2024-06-11 14:07:12.606606] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.795 [2024-06-11 14:07:12.607031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.795 [2024-06-11 14:07:12.607058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.795 [2024-06-11 14:07:12.615050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 
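The three-line pattern that repeats for the rest of this run is mechanical rather than incidental. The controller was attached with --ddgst, so every write travels in an NVMe/TCP DATA PDU that ends in a DDGST field, a CRC32C over the payload. The receiving side recomputes that CRC32C through the accel layer, which is the operation that accel_error_inject_error -o crc32c -t corrupt -i 32 corrupted above; on mismatch tcp.c logs the data digest error and the WRITE completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of the digest arithmetic, assuming a plain bit-by-bit CRC32C in Python rather than SPDK's accelerated implementation:

    # Illustrative sketch only, not SPDK source. NVMe/TCP digests use
    # CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    payload = bytes(131072)        # one 128 KiB write, as in this run
    ddgst = crc32c(payload)        # digest the sender appends to the PDU
    corrupted = ddgst ^ 1          # a corrupted recompute on the receive side
    assert corrupted != ddgst      # mismatch -> the *ERROR* lines below

Because only the transport-level check failed, the status is transient: with --bdev-retry-count -1 the initiator keeps retrying each write, and the test instead proves the failures were seen by reading error counters once the run ends.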
00:39:19.796 [2024-06-11 14:07:12.615471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.615504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.624603] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.625041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.625067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.633707] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.634134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.634161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.641507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.641940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.641967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.649113] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.649527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.649553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.656760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.657181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.657208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.665460] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.665884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.675425] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.675865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.675892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.684760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.684925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.684950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.694257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.694472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.694502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:19.796 [2024-06-11 14:07:12.702818] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:19.796 [2024-06-11 14:07:12.703230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:19.796 [2024-06-11 14:07:12.703256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.711849] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.712289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.712315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.720823] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.721270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.721297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.730131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.730550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.730577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.737519] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.737930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.737957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.744260] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.744676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.744703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.750923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.751348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.751375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.758563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.758980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.759006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.767215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.767643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.767669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.776834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.777258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.777284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.785742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.786189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.786220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:39:20.056 [2024-06-11 14:07:12.793968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.794392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.794418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.801708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.802136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.802163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.808381] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.808814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.808840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.816224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.816660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.816686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.823718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.824151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.824178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.831362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.831809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.831835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.839013] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.839459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.839491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.846963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.847389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.847415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.855820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.856248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.856274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.863993] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.864434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.056 [2024-06-11 14:07:12.864460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.056 [2024-06-11 14:07:12.871880] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.056 [2024-06-11 14:07:12.872305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.872331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.879519] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.879949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.879976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.887702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.888125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.888152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.895570] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.895992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.896018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.902390] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.902827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.902854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.909052] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.909468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.909500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.915448] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.915881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.915907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.921813] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.922240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.922267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.928785] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.929205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.929231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.935846] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.936261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.936288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.942537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.942965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.942992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.949224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.949660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.949686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.057 [2024-06-11 14:07:12.957809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.057 [2024-06-11 14:07:12.958240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.057 [2024-06-11 14:07:12.958267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:12.965669] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:12.966095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:12.966120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:12.972254] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:12.972686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:12.972713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:12.978960] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:12.979386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:12.979417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:12.985992] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:12.986415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:12.986441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:12.992950] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:12.993375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 
[2024-06-11 14:07:12.993401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.000861] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.001352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.008347] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.008781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.008807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.014702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.015118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.015144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.021160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.021583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.021609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.027981] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.028410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.028436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.036114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.036543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.036569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.043939] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.044374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.044401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.052075] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.052492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.052518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.061191] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.061635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.061661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.070322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.070758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.070785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.079259] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.079717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.079743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.088116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.088574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.088600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.096864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.097278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.097304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.105933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.106363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.106388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.114488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.319 [2024-06-11 14:07:13.114915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.319 [2024-06-11 14:07:13.114941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.319 [2024-06-11 14:07:13.123418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.123840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.123866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.132219] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.132646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.132672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.140758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.141172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.141198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.149326] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.149766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.149792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.157709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.158129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.158155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.166368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.166809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.166836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.174752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.175167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.175193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.183071] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.183506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.183532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.192192] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.192612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.192642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.199986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.200397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.200422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.206998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.207420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.207447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.213878] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 [2024-06-11 14:07:13.214293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.214318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.320 [2024-06-11 14:07:13.221038] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.320 
[2024-06-11 14:07:13.221459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.320 [2024-06-11 14:07:13.221491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.228990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.229422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.229447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.236962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.237382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.237408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.244156] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.244599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.244626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.252029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.252444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.252470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.259870] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.260301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.260327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.267734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.268141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.268168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.275713] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.276130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.276155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.283975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.284386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.284412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.293655] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.294095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.294121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.580 [2024-06-11 14:07:13.303494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.580 [2024-06-11 14:07:13.303934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.580 [2024-06-11 14:07:13.303960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.311989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.312415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.312441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.319974] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.320408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.320433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.327212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.327637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.327668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.334666] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.335088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.335114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.344174] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.344613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.344640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.353642] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.354081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.354106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.363530] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.363962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.363988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.373630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.374071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.374097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.384056] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.384492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.384518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.393886] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.394329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.394355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
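When the two-second run completes, digest.sh closes the loop the same way the previous test did above: get_transient_errcount pulls bdev_get_iostat over the bperf RPC socket and asserts that the (00/22) completions were tallied under the bdev's nvme_error statistics (enabled earlier via --nvme-error-stat). A small Python sketch equivalent to that jq pipeline, with the JSON path and rpc.py invocation taken verbatim from the trace in this run:

    # Sketch of get_transient_errcount's extraction step.
    import json, subprocess

    out = subprocess.check_output([
        "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py",
        "-s", "/var/tmp/bperf.sock", "bdev_get_iostat", "-b", "nvme0n1",
    ])
    bdev = json.loads(out)["bdevs"][0]
    errcount = (bdev["driver_specific"]["nvme_error"]
                    ["status_code"]["command_transient_transport_error"])
    print(errcount)  # digest.sh checks (( errcount > 0 )); earlier it was 165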
00:39:20.581 [2024-06-11 14:07:13.403789] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.404231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.404258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.413513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.413942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.413968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.423070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.423517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.423543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.432561] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.432982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.433008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.440996] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.441428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.441454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.450282] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.450491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.450515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.459882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.460301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.460327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.468007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.468420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.468446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.475782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.476201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.476227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.581 [2024-06-11 14:07:13.483801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.581 [2024-06-11 14:07:13.484220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.581 [2024-06-11 14:07:13.484246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.491973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.492417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.492443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.499368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.499793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.499819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.506491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.506908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.506933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.514045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.514472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.514503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.522306] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.522747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.522774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.529513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.529924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.529951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.536012] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.536429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.536455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.542399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.542816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.542842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.548797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.549219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.549250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.555950] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.556382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.556408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.562596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.563022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.563047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.570704] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.571144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.578205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.578629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.578654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.841 [2024-06-11 14:07:13.586155] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.841 [2024-06-11 14:07:13.586583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.841 [2024-06-11 14:07:13.586609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.594205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.594640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.594666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.602321] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.602756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.602782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.610857] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.611287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.611313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.620035] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.620482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 
[2024-06-11 14:07:13.620508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.628801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.629219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.629245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.637378] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.637817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.637843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.644867] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.645284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.645309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.652299] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.652752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.652779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.658936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.659359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.659385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.665780] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.666199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.666225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.673311] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.673728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.673754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.680875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.681303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.681327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.689133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.689253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.689277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.697876] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.698291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.698317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.706608] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.707025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.707050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.715172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.715598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.715624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.722585] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.722999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.723025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.731811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.732226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.732253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:20.842 [2024-06-11 14:07:13.741073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:20.842 [2024-06-11 14:07:13.741508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:20.842 [2024-06-11 14:07:13.741533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.102 [2024-06-11 14:07:13.750344] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.102 [2024-06-11 14:07:13.750792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.102 [2024-06-11 14:07:13.750818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.102 [2024-06-11 14:07:13.760658] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.102 [2024-06-11 14:07:13.761093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.102 [2024-06-11 14:07:13.761122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.102 [2024-06-11 14:07:13.770385] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.102 [2024-06-11 14:07:13.770812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.102 [2024-06-11 14:07:13.770838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.102 [2024-06-11 14:07:13.780349] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.102 [2024-06-11 14:07:13.780793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.780819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.789869] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.790007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.790031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.799288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.799726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.799751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.809172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.809607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.809633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.818951] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.819391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.819417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.828956] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.829398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.829425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.838971] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.839407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.848540] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.848984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.849010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.858679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.859121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.859146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.868504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 
[2024-06-11 14:07:13.868643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.868667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.877571] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.878002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.878027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.887338] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.887785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.887812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.896820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.896983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.897008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.906451] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.906891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.906917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.915577] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.916018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.916044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.924663] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.925099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.925125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.934407] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.934846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.934872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.943558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.943985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.944011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.951621] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.952052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.952078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.958836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.959258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.959284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.966465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.966925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.966950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.974472] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.974929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.974955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.983865] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.984074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.984098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.991829] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.992249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.992275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:13.999442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:13.999550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:13.999575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.103 [2024-06-11 14:07:14.007018] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.103 [2024-06-11 14:07:14.007453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.103 [2024-06-11 14:07:14.007485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.364 [2024-06-11 14:07:14.013837] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.364 [2024-06-11 14:07:14.014245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.364 [2024-06-11 14:07:14.014271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.364 [2024-06-11 14:07:14.020786] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.364 [2024-06-11 14:07:14.021210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.364 [2024-06-11 14:07:14.021236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:21.364 [2024-06-11 14:07:14.028146] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.364 [2024-06-11 14:07:14.028582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.364 [2024-06-11 14:07:14.028608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.364 [2024-06-11 14:07:14.035461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.364 [2024-06-11 14:07:14.035892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.364 [2024-06-11 14:07:14.035918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
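For context on the run of records above and below: the digest that data_crc32_calc_done checks is the NVMe/TCP data digest (DDGST), a CRC32C computed over a PDU's DATA field. When the receiver's calculated CRC32C does not match the digest carried in the PDU, the affected command is completed with a transport-level status rather than success, which is why every WRITE in this test ends with COMMAND TRANSIENT TRANSPORT ERROR (00/22). Below is a minimal self-contained sketch of such a check; the payload and recv_ddgst values are made up for illustration, and this is not SPDK's actual code, just the CRC-32C (Castagnoli) arithmetic it relies on.

/* Toy DDGST check: CRC-32C over a payload, compared with a received digest. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                     /* standard CRC-32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)                 /* reflected form of polynomial 0x1EDC6F41 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                       /* final inversion */
}

int main(void)
{
    const char payload[] = "123456789";             /* stand-in for a PDU DATA field */
    uint32_t recv_ddgst = 0xDEADBEEFu;              /* pretend digest off the wire, wrong on purpose */
    uint32_t calc = crc32c((const uint8_t *)payload, strlen(payload));

    /* Known answer: CRC-32C("123456789") == 0xE3069283, so this mismatches. */
    if (calc != recv_ddgst)
        printf("Data digest error: calculated 0x%08X, received 0x%08X\n",
               (unsigned)calc, (unsigned)recv_ddgst);
    return 0;
}

Compiled with a plain cc and run, this prints a mismatch; that mismatch is the condition behind each *ERROR* line here, and the transport then completes the command with the transient transport error status seen throughout this section.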
00:39:21.364 [2024-06-11 14:07:14.042546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90
00:39:21.364 [2024-06-11 14:07:14.042972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:21.364 [2024-06-11 14:07:14.042997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern continues from 14:07:14.049239 through 14:07:14.518327 (elapsed 00:39:21.364 to 00:39:21.627), again varying only in the timestamps, lba, and sqhd fields ...]
00:39:21.627 [2024-06-11 14:07:14.525116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.627 [2024-06-11 14:07:14.525623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.627 [2024-06-11 14:07:14.525649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:21.627 [2024-06-11 14:07:14.533303] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.627 [2024-06-11 14:07:14.533706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.627 [2024-06-11 14:07:14.533736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:21.887 [2024-06-11 14:07:14.540508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.887 [2024-06-11 14:07:14.540980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.887 [2024-06-11 14:07:14.541006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:21.887 [2024-06-11 14:07:14.548891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x208d560) with pdu=0x2000190fef90 00:39:21.887 [2024-06-11 14:07:14.549412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.887 [2024-06-11 14:07:14.549438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:39:21.887
00:39:21.887 Latency(us)
00:39:21.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:21.887 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:39:21.887 nvme0n1 : 2.00 3796.57 474.57 0.00 0.00 4206.60 2778.73 12582.91
00:39:21.887 ===================================================================================================================
00:39:21.887 Total : 3796.57 474.57 0.00 0.00 4206.60 2778.73 12582.91
00:39:21.887 0
00:39:21.887 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:21.887 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:21.887 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:21.887 | .driver_specific 00:39:21.887 | .nvme_error 00:39:21.887 | .status_code 00:39:21.887 | .command_transient_transport_error' 00:39:21.887 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 245 > 0 )) 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1665184 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1665184 ']' 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1665184 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1665184 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1665184' 00:39:22.146 killing process with pid 1665184 00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1665184
00:39:22.146 Received shutdown signal, test time was about 2.000000 seconds
00:39:22.146
00:39:22.146 Latency(us)
00:39:22.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:22.146 ===================================================================================================================
00:39:22.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:22.146 14:07:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1665184 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1663100 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1663100 ']' 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1663100 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1663100 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1663100' 00:39:22.406 killing process with pid 1663100 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1663100 00:39:22.406 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1663100 00:39:22.667 00:39:22.667 real 0m17.318s 00:39:22.667 user 0m33.600s 00:39:22.667 sys 0m4.951s 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:22.667 ************************************ 00:39:22.667 END TEST nvmf_digest_error 00:39:22.667 ************************************ 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:22.667 14:07:15
nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:22.667 rmmod nvme_tcp 00:39:22.667 rmmod nvme_fabrics 00:39:22.667 rmmod nvme_keyring 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1663100 ']' 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1663100 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1663100 ']' 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1663100 00:39:22.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1663100) - No such process 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 1663100 is not found' 00:39:22.667 Process with pid 1663100 is not found 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:22.667 14:07:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.203 14:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:25.203 00:39:25.203 real 0m44.138s 00:39:25.203 user 1m9.106s 00:39:25.203 sys 0m15.457s 00:39:25.203 14:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:25.203 14:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:25.203 ************************************ 00:39:25.203 END TEST nvmf_digest 00:39:25.203 ************************************ 00:39:25.203 14:07:17 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:39:25.203 14:07:17 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:39:25.203 14:07:17 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:39:25.203 14:07:17 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:25.203 14:07:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:25.203 14:07:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:25.203 14:07:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:25.203 ************************************ 00:39:25.203 START TEST nvmf_bdevperf 00:39:25.203 ************************************ 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:25.203 * Looking for test storage... 00:39:25.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:25.203 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:39:25.204 14:07:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:31.778 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:31.778 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:31.778 Found net devices under 0000:af:00.0: cvl_0_0 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:31.778 Found net devices under 0000:af:00.1: cvl_0_1 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:31.778 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:31.779 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:32.038 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:32.038 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:32.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:32.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:39:32.039 00:39:32.039 --- 10.0.0.2 ping statistics --- 00:39:32.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.039 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:32.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:32.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:39:32.039 00:39:32.039 --- 10.0.0.1 ping statistics --- 00:39:32.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.039 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1669682 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1669682 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1669682 ']' 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:32.039 14:07:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:32.039 [2024-06-11 14:07:24.842612] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:32.039 [2024-06-11 14:07:24.842673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.039 EAL: No free 2048 kB hugepages reported on node 1 00:39:32.039 [2024-06-11 14:07:24.941572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:32.298 [2024-06-11 14:07:25.029985] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:32.298 [2024-06-11 14:07:25.030026] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.298 [2024-06-11 14:07:25.030040] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:32.298 [2024-06-11 14:07:25.030052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:32.298 [2024-06-11 14:07:25.030062] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:32.298 [2024-06-11 14:07:25.030168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:39:32.298 [2024-06-11 14:07:25.030278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.298 [2024-06-11 14:07:25.030278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:32.948 [2024-06-11 14:07:25.806051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:32.948 Malloc0 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:32.948 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
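The rpc_cmd calls traced here are the entire target-side setup for the bdevperf tests: create a TCP transport, back a namespace with a 64 MiB malloc bdev, and expose it through subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. As a minimal standalone sketch, the same bring-up can be issued directly with scripts/rpc.py (this assumes a running nvmf_tgt on the default /var/tmp/spdk.sock socket; the test itself routes each call through rpc_cmd inside the cvl_0_0_ns_spdk namespace set up earlier):

    # Sketch of the target bring-up traced above; flags copied from the trace.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8192-byte in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # Malloc0 becomes the namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added, the "NVMe/TCP Target Listening" notice that follows confirms the port is live.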
00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:33.208 [2024-06-11 14:07:25.869028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:33.208 { 00:39:33.208 "params": { 00:39:33.208 "name": "Nvme$subsystem", 00:39:33.208 "trtype": "$TEST_TRANSPORT", 00:39:33.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:33.208 "adrfam": "ipv4", 00:39:33.208 "trsvcid": "$NVMF_PORT", 00:39:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:33.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:33.208 "hdgst": ${hdgst:-false}, 00:39:33.208 "ddgst": ${ddgst:-false} 00:39:33.208 }, 00:39:33.208 "method": "bdev_nvme_attach_controller" 00:39:33.208 } 00:39:33.208 EOF 00:39:33.208 )") 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:39:33.208 14:07:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:33.208 "params": { 00:39:33.208 "name": "Nvme1", 00:39:33.208 "trtype": "tcp", 00:39:33.208 "traddr": "10.0.0.2", 00:39:33.208 "adrfam": "ipv4", 00:39:33.208 "trsvcid": "4420", 00:39:33.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:33.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:33.208 "hdgst": false, 00:39:33.208 "ddgst": false 00:39:33.208 }, 00:39:33.208 "method": "bdev_nvme_attach_controller" 00:39:33.208 }' 00:39:33.208 [2024-06-11 14:07:25.923737] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:33.208 [2024-06-11 14:07:25.923802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669760 ] 00:39:33.208 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.208 [2024-06-11 14:07:26.024751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.208 [2024-06-11 14:07:26.105857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.467 Running I/O for 1 seconds... 
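The JSON printed just above is what gen_nvmf_target_json hands to bdevperf over /dev/fd/62: a single bdev_nvme_attach_controller call that connects Nvme1 to the subsystem created earlier, with header and data digests both disabled. A hedged sketch of reproducing the 1-second verify run by hand follows (the file path is illustrative, and the outer "subsystems"/"bdev" wrapper is an assumption about the full config gen_nvmf_target_json assembles around the printed fragment):

    # Sketch: same workload as the trace, with the config in a file instead of /dev/fd.
    cat > /tmp/bperf_nvme1.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bperf_nvme1.json -q 128 -o 4096 -w verify -t 1   # flags as traced above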
00:39:34.405
00:39:34.405 Latency(us)
00:39:34.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:34.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:39:34.405 Verification LBA range: start 0x0 length 0x4000
00:39:34.405 Nvme1n1 : 1.00 8512.04 33.25 0.00 0.00 14974.80 2123.37 15938.36
00:39:34.405 ===================================================================================================================
00:39:34.405 Total : 8512.04 33.25 0.00 0.00 14974.80 2123.37 15938.36
00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1670003 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:34.664 { 00:39:34.664 "params": { 00:39:34.664 "name": "Nvme$subsystem", 00:39:34.664 "trtype": "$TEST_TRANSPORT", 00:39:34.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:34.664 "adrfam": "ipv4", 00:39:34.664 "trsvcid": "$NVMF_PORT", 00:39:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:34.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:34.664 "hdgst": ${hdgst:-false}, 00:39:34.664 "ddgst": ${ddgst:-false} 00:39:34.664 }, 00:39:34.664 "method": "bdev_nvme_attach_controller" 00:39:34.664 } 00:39:34.664 EOF 00:39:34.664 )") 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:39:34.664 14:07:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:34.664 "params": { 00:39:34.664 "name": "Nvme1", 00:39:34.664 "trtype": "tcp", 00:39:34.664 "traddr": "10.0.0.2", 00:39:34.664 "adrfam": "ipv4", 00:39:34.664 "trsvcid": "4420", 00:39:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:34.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:34.664 "hdgst": false, 00:39:34.664 "ddgst": false 00:39:34.664 }, 00:39:34.664 "method": "bdev_nvme_attach_controller" 00:39:34.664 }' 00:39:34.664 [2024-06-11 14:07:27.521732] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... [2024-06-11 14:07:27.521802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670003 ] 00:39:34.923 EAL: No free 2048 kB hugepages reported on node 1 00:39:34.923 [2024-06-11 14:07:27.621994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.923 [2024-06-11 14:07:27.702300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.182 Running I/O for 15 seconds... 
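This second bdevperf invocation is the fault-injection half of the test: -t 15 keeps the verify workload running while -f tells bdevperf to continue on failure, and host/bdevperf.sh then kills the target out from under it (the kill -9 1669682 at the start of the next trace block). Reduced to its core, the pattern is this sketch; bdevperfpid and nvmfpid are the variable names visible in the trace, while BDEVPERF stands in for the build/examples/bdevperf path and the surrounding script plumbing is omitted:

    # Sketch: run bdevperf in the background, then SIGKILL the target mid-run.
    "$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!         # 1670003 in this run
    sleep 3                # let I/O get in flight
    kill -9 "$nvmfpid"     # 1669682: the nvmf_tgt started earlier
    sleep 3                # outstanding commands now fail back to the host

With the target gone, every queued command on the connection completes with ABORTED - SQ DELETION, which is the flood of completions that fills the rest of this run.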
00:39:37.719 14:07:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1669682 00:39:37.719 14:07:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:39:37.719 [2024-06-11 14:07:30.491877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.491921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.491969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.491988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:37.719 [2024-06-11 14:07:30.492206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.719 [2024-06-11 14:07:30.492221] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:37.719 [2024-06-11 14:07:30.492238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:37.719 [2024-06-11 14:07:30.492256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued I/O on qid:1 (READ commands covering lba 37528-38440 plus a WRITE at lba 38456, all len:8) is printed and aborted identically with ABORTED - SQ DELETION (00/08) status during queue teardown; repeated command/completion pairs elided ...]
00:39:37.722 [2024-06-11 14:07:30.495525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8360 is same with the state(5) to be set
00:39:37.722 [2024-06-11 14:07:30.495542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:39:37.722 [2024-06-11 14:07:30.495553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:39:37.722 [2024-06-11 14:07:30.495564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38448 len:8 PRP1 0x0 PRP2 0x0
00:39:37.722 [2024-06-11 14:07:30.495577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:37.722 [2024-06-11 14:07:30.495630] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb8360 was disconnected and freed. reset controller.
00:39:37.722 [2024-06-11 14:07:30.499401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:37.722 [2024-06-11 14:07:30.499462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:37.722 [2024-06-11 14:07:30.500120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:37.722 [2024-06-11 14:07:30.500143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:37.722 [2024-06-11 14:07:30.500156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:37.722 [2024-06-11 14:07:30.500393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:37.722 [2024-06-11 14:07:30.500637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:37.722 [2024-06-11 14:07:30.500652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:37.722 [2024-06-11 14:07:30.500666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:37.722 [2024-06-11 14:07:30.504406] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 21 further reset attempts for nqn.2016-06.io.spdk:cnode1 between 14:07:30.513892 and 14:07:30.800817 fail with this same sequence: connect() to tqpair=0xb87400 at addr=10.0.0.2, port=4420 returns errno = 111, the controller enters the error state, reinitialization fails, and each attempt ends with "Resetting controller failed."; duplicate cycles elided ...]
00:39:37.986 [2024-06-11 14:07:30.810077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:37.986 [2024-06-11 14:07:30.810621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:37.986 [2024-06-11 14:07:30.810645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:37.986 [2024-06-11 14:07:30.810658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:37.986 [2024-06-11 14:07:30.810895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:37.986 [2024-06-11 14:07:30.811134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:37.986 [2024-06-11 14:07:30.811149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:37.986 [2024-06-11 14:07:30.811162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:37.986 [2024-06-11 14:07:30.814909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:37.986 [2024-06-11 14:07:30.824165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:37.986 [2024-06-11 14:07:30.824666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:37.986 [2024-06-11 14:07:30.824690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:37.986 [2024-06-11 14:07:30.824703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:37.986 [2024-06-11 14:07:30.824940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:37.986 [2024-06-11 14:07:30.825180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:37.986 [2024-06-11 14:07:30.825195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:37.986 [2024-06-11 14:07:30.825208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:37.986 [2024-06-11 14:07:30.828962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:37.986 [2024-06-11 14:07:30.838241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:37.986 [2024-06-11 14:07:30.838760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:37.986 [2024-06-11 14:07:30.838813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:37.986 [2024-06-11 14:07:30.838845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:37.986 [2024-06-11 14:07:30.839316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:37.986 [2024-06-11 14:07:30.839561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:37.986 [2024-06-11 14:07:30.839577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:37.986 [2024-06-11 14:07:30.839590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:37.986 [2024-06-11 14:07:30.843329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:37.986 [2024-06-11 14:07:30.852372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:37.986 [2024-06-11 14:07:30.852809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:37.986 [2024-06-11 14:07:30.852832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:37.986 [2024-06-11 14:07:30.852846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:37.986 [2024-06-11 14:07:30.853082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:37.986 [2024-06-11 14:07:30.853321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:37.986 [2024-06-11 14:07:30.853336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:37.986 [2024-06-11 14:07:30.853349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:37.986 [2024-06-11 14:07:30.857090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:37.986 [2024-06-11 14:07:30.866573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:37.986 [2024-06-11 14:07:30.867147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:37.986 [2024-06-11 14:07:30.867198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:37.986 [2024-06-11 14:07:30.867239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:37.986 [2024-06-11 14:07:30.867639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:37.986 [2024-06-11 14:07:30.867878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:37.986 [2024-06-11 14:07:30.867893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:37.986 [2024-06-11 14:07:30.867906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:37.986 [2024-06-11 14:07:30.871646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:37.986 [2024-06-11 14:07:30.880693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:37.986 [2024-06-11 14:07:30.881233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:37.986 [2024-06-11 14:07:30.881257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:37.986 [2024-06-11 14:07:30.881270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:37.986 [2024-06-11 14:07:30.881513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:37.986 [2024-06-11 14:07:30.881751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:37.986 [2024-06-11 14:07:30.881766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:37.986 [2024-06-11 14:07:30.881779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:37.986 [2024-06-11 14:07:30.885519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
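Every failed cycle above starts at the same first step: posix_sock_create reports connect() failed, errno = 111. On Linux, errno 111 is ECONNREFUSED, returned when the TCP SYN is answered with a RST because nothing is listening on the target port, which is what happens while the target-side NVMe/TCP subsystem at 10.0.0.2:4420 is torn down. The following is a minimal standalone sketch (not part of the SPDK tree) that reproduces the same errno; it targets 127.0.0.1 so it refuses locally, whereas the log's target was 10.0.0.2.

/* refused.c - hedged sketch: provoke the errno = 111 (ECONNREFUSED) seen
 * in posix_sock_create above by connecting to a port with no listener.
 * Port 4420 (the NVMe/TCP default from the log) is assumed unused locally. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),           /* NVMe/TCP default port, per the log */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected while no target listens: errno == ECONNREFUSED (111) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built with cc -o refused refused.c, this prints connect() failed, errno = 111 (Connection refused) on a Linux host with nothing bound to port 4420, matching the records above.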
00:39:38.247 [2024-06-11 14:07:30.894778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.247 [2024-06-11 14:07:30.895335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.247 [2024-06-11 14:07:30.895358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.247 [2024-06-11 14:07:30.895371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.247 [2024-06-11 14:07:30.895614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.247 [2024-06-11 14:07:30.895853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.247 [2024-06-11 14:07:30.895868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.247 [2024-06-11 14:07:30.895881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.247 [2024-06-11 14:07:30.899623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.247 [2024-06-11 14:07:30.908891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.247 [2024-06-11 14:07:30.909336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.247 [2024-06-11 14:07:30.909359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.247 [2024-06-11 14:07:30.909373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.247 [2024-06-11 14:07:30.909615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.247 [2024-06-11 14:07:30.909854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.247 [2024-06-11 14:07:30.909873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.247 [2024-06-11 14:07:30.909886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.247 [2024-06-11 14:07:30.913629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.247 [2024-06-11 14:07:30.923100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.247 [2024-06-11 14:07:30.923591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.247 [2024-06-11 14:07:30.923643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.247 [2024-06-11 14:07:30.923676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.247 [2024-06-11 14:07:30.924267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.247 [2024-06-11 14:07:30.924764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.247 [2024-06-11 14:07:30.924780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.247 [2024-06-11 14:07:30.924792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.247 [2024-06-11 14:07:30.928531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.247 [2024-06-11 14:07:30.937133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.247 [2024-06-11 14:07:30.937629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.247 [2024-06-11 14:07:30.937653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.247 [2024-06-11 14:07:30.937666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.247 [2024-06-11 14:07:30.937903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.247 [2024-06-11 14:07:30.938142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:30.938157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:30.938170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:30.941909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.248 [2024-06-11 14:07:30.951456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:30.952026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:30.952080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:30.952114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:30.952572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:30.952812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:30.952827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:30.952840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:30.956584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.248 [2024-06-11 14:07:30.965628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:30.966107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:30.966130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:30.966144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:30.966380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:30.966626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:30.966642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:30.966654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:30.970388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.248 [2024-06-11 14:07:30.979647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:30.980126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:30.980150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:30.980163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:30.980400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:30.980647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:30.980663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:30.980676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:30.984414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.248 [2024-06-11 14:07:30.993686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:30.994194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:30.994218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:30.994231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:30.994468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:30.994715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:30.994730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:30.994742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:30.998483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.248 [2024-06-11 14:07:31.007751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:31.008328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:31.008382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:31.008415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:31.008948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:31.009189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:31.009204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:31.009216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:31.012959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.248 [2024-06-11 14:07:31.021782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:31.022273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:31.022297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:31.022310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:31.022553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:31.022793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:31.022808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:31.022821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:31.026561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.248 [2024-06-11 14:07:31.035826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:31.036298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:31.036321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:31.036334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:31.036579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:31.036818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:31.036834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:31.036847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:31.040590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.248 [2024-06-11 14:07:31.049851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:31.050442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:31.050504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:31.050537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:31.051056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:31.051295] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:31.051310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:31.051326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:31.055068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.248 [2024-06-11 14:07:31.063890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:31.064436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:31.064459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:31.064472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:31.064714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:31.064953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:31.064968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:31.064980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.248 [2024-06-11 14:07:31.068725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.248 [2024-06-11 14:07:31.077980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.248 [2024-06-11 14:07:31.078498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.248 [2024-06-11 14:07:31.078550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.248 [2024-06-11 14:07:31.078582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.248 [2024-06-11 14:07:31.079068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.248 [2024-06-11 14:07:31.079308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.248 [2024-06-11 14:07:31.079323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.248 [2024-06-11 14:07:31.079336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.249 [2024-06-11 14:07:31.083074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.249 [2024-06-11 14:07:31.092112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.249 [2024-06-11 14:07:31.092660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.249 [2024-06-11 14:07:31.092684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.249 [2024-06-11 14:07:31.092698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.249 [2024-06-11 14:07:31.092935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.249 [2024-06-11 14:07:31.093173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.249 [2024-06-11 14:07:31.093188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.249 [2024-06-11 14:07:31.093200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.249 [2024-06-11 14:07:31.096938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.249 [2024-06-11 14:07:31.106200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.249 [2024-06-11 14:07:31.106679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.249 [2024-06-11 14:07:31.106706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.249 [2024-06-11 14:07:31.106720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.249 [2024-06-11 14:07:31.106956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.249 [2024-06-11 14:07:31.107194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.249 [2024-06-11 14:07:31.107209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.249 [2024-06-11 14:07:31.107221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.249 [2024-06-11 14:07:31.110962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.249 [2024-06-11 14:07:31.120221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.249 [2024-06-11 14:07:31.120705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.249 [2024-06-11 14:07:31.120729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.249 [2024-06-11 14:07:31.120742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.249 [2024-06-11 14:07:31.120980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.249 [2024-06-11 14:07:31.121218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.249 [2024-06-11 14:07:31.121234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.249 [2024-06-11 14:07:31.121246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.249 [2024-06-11 14:07:31.124986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.249 [2024-06-11 14:07:31.134263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.249 [2024-06-11 14:07:31.134816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.249 [2024-06-11 14:07:31.134869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.249 [2024-06-11 14:07:31.134901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.249 [2024-06-11 14:07:31.135408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.249 [2024-06-11 14:07:31.135652] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.249 [2024-06-11 14:07:31.135668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.249 [2024-06-11 14:07:31.135681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.249 [2024-06-11 14:07:31.139416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.249 [2024-06-11 14:07:31.148459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.249 [2024-06-11 14:07:31.149034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.249 [2024-06-11 14:07:31.149087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.249 [2024-06-11 14:07:31.149119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.249 [2024-06-11 14:07:31.149720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.249 [2024-06-11 14:07:31.150285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.249 [2024-06-11 14:07:31.150300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.249 [2024-06-11 14:07:31.150313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.249 [2024-06-11 14:07:31.154054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.510 [2024-06-11 14:07:31.162660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.510 [2024-06-11 14:07:31.163148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.510 [2024-06-11 14:07:31.163199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.510 [2024-06-11 14:07:31.163231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.510 [2024-06-11 14:07:31.163834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.510 [2024-06-11 14:07:31.164341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.510 [2024-06-11 14:07:31.164356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.510 [2024-06-11 14:07:31.164368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.510 [2024-06-11 14:07:31.168106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.510 [2024-06-11 14:07:31.176707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.510 [2024-06-11 14:07:31.177188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.510 [2024-06-11 14:07:31.177212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.510 [2024-06-11 14:07:31.177226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.510 [2024-06-11 14:07:31.177464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.510 [2024-06-11 14:07:31.177710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.510 [2024-06-11 14:07:31.177726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.510 [2024-06-11 14:07:31.177739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.510 [2024-06-11 14:07:31.181468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.510 [2024-06-11 14:07:31.190734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.510 [2024-06-11 14:07:31.191292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.510 [2024-06-11 14:07:31.191343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.510 [2024-06-11 14:07:31.191375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.510 [2024-06-11 14:07:31.191981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.510 [2024-06-11 14:07:31.192444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.510 [2024-06-11 14:07:31.192460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.510 [2024-06-11 14:07:31.192473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.510 [2024-06-11 14:07:31.196213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.510 [2024-06-11 14:07:31.204812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.510 [2024-06-11 14:07:31.205368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.510 [2024-06-11 14:07:31.205420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.510 [2024-06-11 14:07:31.205452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.510 [2024-06-11 14:07:31.206056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.510 [2024-06-11 14:07:31.206296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.510 [2024-06-11 14:07:31.206311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.510 [2024-06-11 14:07:31.206323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.510 [2024-06-11 14:07:31.210053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.510 [2024-06-11 14:07:31.218878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.510 [2024-06-11 14:07:31.219433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.510 [2024-06-11 14:07:31.219494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.510 [2024-06-11 14:07:31.219528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.510 [2024-06-11 14:07:31.220116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.510 [2024-06-11 14:07:31.220458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.510 [2024-06-11 14:07:31.220490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.220511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.226755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.511 [2024-06-11 14:07:31.233814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.234313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.234338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.234352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.234617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.234877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.234894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.234908] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.238967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.511 [2024-06-11 14:07:31.247910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.248455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.248484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.248502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.248740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.248980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.248995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.249007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.252746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.511 [2024-06-11 14:07:31.262026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.262451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.262481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.262496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.262733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.262971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.262986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.262999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.266739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.511 [2024-06-11 14:07:31.276219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.276774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.276826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.276858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.277447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.277870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.277886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.277899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.281638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.511 [2024-06-11 14:07:31.290240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.290749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.290772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.290786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.291022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.291261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.291280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.291293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.295036] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.511 [2024-06-11 14:07:31.304293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.304773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.304797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.304810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.305049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.305287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.305302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.305314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.309057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.511 [2024-06-11 14:07:31.318321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.318907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.318958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.318991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.319567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.319807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.319822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.319835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.323579] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.511 [2024-06-11 14:07:31.332420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.332935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.332987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.333019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.333500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.333739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.333754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.333767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.337512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.511 [2024-06-11 14:07:31.346574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.347054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.347077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.347090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.347327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.347575] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.347592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.347605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.351341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:38.511 [2024-06-11 14:07:31.360616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:38.511 [2024-06-11 14:07:31.361138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:38.511 [2024-06-11 14:07:31.361161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:38.511 [2024-06-11 14:07:31.361174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:38.511 [2024-06-11 14:07:31.361411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:38.511 [2024-06-11 14:07:31.361658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:38.511 [2024-06-11 14:07:31.361674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:38.511 [2024-06-11 14:07:31.361687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:38.511 [2024-06-11 14:07:31.365421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:38.512 [2024-06-11 14:07:31.374688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.512 [2024-06-11 14:07:31.375232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.512 [2024-06-11 14:07:31.375255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.512 [2024-06-11 14:07:31.375269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.512 [2024-06-11 14:07:31.375514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.512 [2024-06-11 14:07:31.375753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.512 [2024-06-11 14:07:31.375769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.512 [2024-06-11 14:07:31.375781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.512 [2024-06-11 14:07:31.379522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.512 [2024-06-11 14:07:31.388962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.512 [2024-06-11 14:07:31.389529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.512 [2024-06-11 14:07:31.389553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.512 [2024-06-11 14:07:31.389566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.512 [2024-06-11 14:07:31.389808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.512 [2024-06-11 14:07:31.390047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.512 [2024-06-11 14:07:31.390062] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.512 [2024-06-11 14:07:31.390074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.512 [2024-06-11 14:07:31.393817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.512 [2024-06-11 14:07:31.403071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.512 [2024-06-11 14:07:31.403656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.512 [2024-06-11 14:07:31.403711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.512 [2024-06-11 14:07:31.403743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.512 [2024-06-11 14:07:31.404181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.512 [2024-06-11 14:07:31.404419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.512 [2024-06-11 14:07:31.404434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.512 [2024-06-11 14:07:31.404447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.512 [2024-06-11 14:07:31.408188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.512 [2024-06-11 14:07:31.417227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.512 [2024-06-11 14:07:31.417799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.512 [2024-06-11 14:07:31.417822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.512 [2024-06-11 14:07:31.417835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.512 [2024-06-11 14:07:31.418071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.773 [2024-06-11 14:07:31.418310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.773 [2024-06-11 14:07:31.418325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.773 [2024-06-11 14:07:31.418338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.773 [2024-06-11 14:07:31.422075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.773 [2024-06-11 14:07:31.431330] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.773 [2024-06-11 14:07:31.431815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.773 [2024-06-11 14:07:31.431839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.773 [2024-06-11 14:07:31.431853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.773 [2024-06-11 14:07:31.432089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.773 [2024-06-11 14:07:31.432327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.773 [2024-06-11 14:07:31.432343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.773 [2024-06-11 14:07:31.432359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.773 [2024-06-11 14:07:31.436096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.773 [2024-06-11 14:07:31.445352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.773 [2024-06-11 14:07:31.445852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.773 [2024-06-11 14:07:31.445875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.773 [2024-06-11 14:07:31.445888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.773 [2024-06-11 14:07:31.446125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.773 [2024-06-11 14:07:31.446363] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.773 [2024-06-11 14:07:31.446379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.773 [2024-06-11 14:07:31.446391] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.773 [2024-06-11 14:07:31.450131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.773 [2024-06-11 14:07:31.459389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.773 [2024-06-11 14:07:31.459964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.773 [2024-06-11 14:07:31.460016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.460049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.460651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.461120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.461136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.461148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.464887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.473489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.473991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.474014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.474028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.474264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.474509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.474525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.474538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.478265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.487512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.488056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.488083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.488096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.488334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.488578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.488594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.488607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.492337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.501595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.502038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.502062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.502075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.502313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.502556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.502572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.502585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.506320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.515717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.516167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.516191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.516204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.516442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.516687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.516703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.516716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.520450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.529941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.530517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.530569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.530601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.531189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.531493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.531510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.531523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.535256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.544076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.544649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.544674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.544687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.544925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.545163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.545178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.545191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.548930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.558187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.558644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.558668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.558681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.558919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.559156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.559172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.559184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.562922] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.572189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.572707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.572760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.572793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.573326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.573568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.774 [2024-06-11 14:07:31.573584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.774 [2024-06-11 14:07:31.573597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.774 [2024-06-11 14:07:31.577334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.774 [2024-06-11 14:07:31.586370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.774 [2024-06-11 14:07:31.586867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.774 [2024-06-11 14:07:31.586891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.774 [2024-06-11 14:07:31.586904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.774 [2024-06-11 14:07:31.587140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.774 [2024-06-11 14:07:31.587378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.587393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.587406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.591163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.775 [2024-06-11 14:07:31.600423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.775 [2024-06-11 14:07:31.601020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.775 [2024-06-11 14:07:31.601044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.775 [2024-06-11 14:07:31.601057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.775 [2024-06-11 14:07:31.601295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.775 [2024-06-11 14:07:31.601540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.601555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.601568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.605303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.775 [2024-06-11 14:07:31.614571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.775 [2024-06-11 14:07:31.615002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.775 [2024-06-11 14:07:31.615025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.775 [2024-06-11 14:07:31.615038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.775 [2024-06-11 14:07:31.615276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.775 [2024-06-11 14:07:31.615521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.615537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.615549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.619277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.775 [2024-06-11 14:07:31.628761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.775 [2024-06-11 14:07:31.629319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.775 [2024-06-11 14:07:31.629371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.775 [2024-06-11 14:07:31.629412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.775 [2024-06-11 14:07:31.630029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.775 [2024-06-11 14:07:31.630466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.630485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.630498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.634232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.775 [2024-06-11 14:07:31.642833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.775 [2024-06-11 14:07:31.643220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.775 [2024-06-11 14:07:31.643271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.775 [2024-06-11 14:07:31.643303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.775 [2024-06-11 14:07:31.643913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.775 [2024-06-11 14:07:31.644462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.644481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.644494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.650295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.775 [2024-06-11 14:07:31.657929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.775 [2024-06-11 14:07:31.658500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.775 [2024-06-11 14:07:31.658525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.775 [2024-06-11 14:07:31.658539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.775 [2024-06-11 14:07:31.658798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.775 [2024-06-11 14:07:31.659057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.659074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.659087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.663156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:38.775 [2024-06-11 14:07:31.672128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:38.775 [2024-06-11 14:07:31.672684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:38.775 [2024-06-11 14:07:31.672707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:38.775 [2024-06-11 14:07:31.672720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:38.775 [2024-06-11 14:07:31.672958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:38.775 [2024-06-11 14:07:31.673196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:38.775 [2024-06-11 14:07:31.673215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:38.775 [2024-06-11 14:07:31.673228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:38.775 [2024-06-11 14:07:31.676966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.036 [2024-06-11 14:07:31.686237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.036 [2024-06-11 14:07:31.686810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.036 [2024-06-11 14:07:31.686833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.036 [2024-06-11 14:07:31.686846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.036 [2024-06-11 14:07:31.687083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.036 [2024-06-11 14:07:31.687321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.036 [2024-06-11 14:07:31.687336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.036 [2024-06-11 14:07:31.687348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.036 [2024-06-11 14:07:31.691088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.036 [2024-06-11 14:07:31.700344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.036 [2024-06-11 14:07:31.700907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.036 [2024-06-11 14:07:31.700930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.036 [2024-06-11 14:07:31.700943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.036 [2024-06-11 14:07:31.701180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.036 [2024-06-11 14:07:31.701418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.036 [2024-06-11 14:07:31.701433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.036 [2024-06-11 14:07:31.701445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.036 [2024-06-11 14:07:31.705181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.036 [2024-06-11 14:07:31.714429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.036 [2024-06-11 14:07:31.714930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.036 [2024-06-11 14:07:31.714953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.036 [2024-06-11 14:07:31.714966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.036 [2024-06-11 14:07:31.715203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.036 [2024-06-11 14:07:31.715441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.036 [2024-06-11 14:07:31.715456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.036 [2024-06-11 14:07:31.715468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.036 [2024-06-11 14:07:31.719208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.036 [2024-06-11 14:07:31.728473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.036 [2024-06-11 14:07:31.728955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.036 [2024-06-11 14:07:31.728978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.036 [2024-06-11 14:07:31.728991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.036 [2024-06-11 14:07:31.729229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.036 [2024-06-11 14:07:31.729468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.036 [2024-06-11 14:07:31.729497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.036 [2024-06-11 14:07:31.729511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.036 [2024-06-11 14:07:31.733240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.036 [2024-06-11 14:07:31.742500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.036 [2024-06-11 14:07:31.743046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.036 [2024-06-11 14:07:31.743069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.036 [2024-06-11 14:07:31.743083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.036 [2024-06-11 14:07:31.743320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.036 [2024-06-11 14:07:31.743564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.036 [2024-06-11 14:07:31.743580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.036 [2024-06-11 14:07:31.743593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.036 [2024-06-11 14:07:31.747327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.036 [2024-06-11 14:07:31.756584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.036 [2024-06-11 14:07:31.757150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.036 [2024-06-11 14:07:31.757173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.036 [2024-06-11 14:07:31.757186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.036 [2024-06-11 14:07:31.757422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.757668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.757684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.757696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.761424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.770688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.771255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.771278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.771292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.771541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.771781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.771796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.771808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.775544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.784797] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.785365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.785413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.785445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.786020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.786416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.786440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.786460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.792709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.799569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.800164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.800214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.800246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.800852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.801238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.801254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.801267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.805326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.813581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.814156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.814208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.814241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.814716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.814955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.814970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.814986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.818722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.827764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.828185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.828208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.828221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.828459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.828705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.828720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.828733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.832482] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.841789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.842291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.842314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.842328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.842573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.842811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.842827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.842839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.846573] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.855826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.856388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.856439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.856472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.856991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.857231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.857246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.857258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.863250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.870847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.871455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.871519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.871551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.872068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.872327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.872343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.872357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.876417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.884874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.885446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.885471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.885492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.885730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.885968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.885983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.037 [2024-06-11 14:07:31.885996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.037 [2024-06-11 14:07:31.889750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.037 [2024-06-11 14:07:31.899002] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.037 [2024-06-11 14:07:31.899568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.037 [2024-06-11 14:07:31.899592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.037 [2024-06-11 14:07:31.899605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.037 [2024-06-11 14:07:31.899842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.037 [2024-06-11 14:07:31.900082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.037 [2024-06-11 14:07:31.900097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.038 [2024-06-11 14:07:31.900110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.038 [2024-06-11 14:07:31.903848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.038 [2024-06-11 14:07:31.913098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.038 [2024-06-11 14:07:31.913676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.038 [2024-06-11 14:07:31.913728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.038 [2024-06-11 14:07:31.913759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.038 [2024-06-11 14:07:31.914189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.038 [2024-06-11 14:07:31.914432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.038 [2024-06-11 14:07:31.914447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.038 [2024-06-11 14:07:31.914460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.038 [2024-06-11 14:07:31.918202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.038 [2024-06-11 14:07:31.927235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.038 [2024-06-11 14:07:31.927809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.038 [2024-06-11 14:07:31.927861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.038 [2024-06-11 14:07:31.927893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.038 [2024-06-11 14:07:31.928406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.038 [2024-06-11 14:07:31.928653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.038 [2024-06-11 14:07:31.928668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.038 [2024-06-11 14:07:31.928681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.038 [2024-06-11 14:07:31.932426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.038 [2024-06-11 14:07:31.941239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.038 [2024-06-11 14:07:31.941810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.038 [2024-06-11 14:07:31.941833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.038 [2024-06-11 14:07:31.941846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.038 [2024-06-11 14:07:31.942082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.038 [2024-06-11 14:07:31.942320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.038 [2024-06-11 14:07:31.942335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.038 [2024-06-11 14:07:31.942348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.298 [2024-06-11 14:07:31.946089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.298 [2024-06-11 14:07:31.955392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.298 [2024-06-11 14:07:31.955851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.298 [2024-06-11 14:07:31.955875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.298 [2024-06-11 14:07:31.955889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.298 [2024-06-11 14:07:31.956126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.298 [2024-06-11 14:07:31.956365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.298 [2024-06-11 14:07:31.956380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.298 [2024-06-11 14:07:31.956392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.298 [2024-06-11 14:07:31.960142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.298 [2024-06-11 14:07:31.969401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.298 [2024-06-11 14:07:31.969991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.298 [2024-06-11 14:07:31.970043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.298 [2024-06-11 14:07:31.970075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.298 [2024-06-11 14:07:31.970683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.298 [2024-06-11 14:07:31.971244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.298 [2024-06-11 14:07:31.971260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.298 [2024-06-11 14:07:31.971272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.298 [2024-06-11 14:07:31.975011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.298 [2024-06-11 14:07:31.983615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.298 [2024-06-11 14:07:31.984174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.298 [2024-06-11 14:07:31.984224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.298 [2024-06-11 14:07:31.984256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.298 [2024-06-11 14:07:31.984859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.298 [2024-06-11 14:07:31.985310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.298 [2024-06-11 14:07:31.985325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.298 [2024-06-11 14:07:31.985338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.298 [2024-06-11 14:07:31.989075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.298 [2024-06-11 14:07:31.997683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.298 [2024-06-11 14:07:31.998238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.298 [2024-06-11 14:07:31.998290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.298 [2024-06-11 14:07:31.998323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.298 [2024-06-11 14:07:31.998927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.298 [2024-06-11 14:07:31.999453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.298 [2024-06-11 14:07:31.999468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.298 [2024-06-11 14:07:31.999485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.298 [2024-06-11 14:07:32.003220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.298 [2024-06-11 14:07:32.011824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.012395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.012418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.012435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.012680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.012919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.012934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.012947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.016687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.299 [2024-06-11 14:07:32.025957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.026542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.026594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.026626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.027180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.027419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.027435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.027447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.031200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.299 [2024-06-11 14:07:32.040025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.040601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.040653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.040685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.041274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.041682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.041698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.041711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.045447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.299 [2024-06-11 14:07:32.054052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.054628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.054680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.054712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.055236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.055474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.055501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.055514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.059250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.299 [2024-06-11 14:07:32.068075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.068619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.068643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.068656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.068893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.069131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.069146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.069159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.072904] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.299 [2024-06-11 14:07:32.082170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.082771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.082795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.082808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.083046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.083284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.083300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.083312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.087055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.299 [2024-06-11 14:07:32.096314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.096892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.096944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.096976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.097512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.097750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.097766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.097778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.101519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.299 [2024-06-11 14:07:32.110342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.110705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.110728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.110742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.110979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.111218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.111233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.111245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.114989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.299 [2024-06-11 14:07:32.124472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.125029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.125081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.125113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.125616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.125856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.125871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.125883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.129631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.299 [2024-06-11 14:07:32.138673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.139241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.139264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.299 [2024-06-11 14:07:32.139277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.299 [2024-06-11 14:07:32.139523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.299 [2024-06-11 14:07:32.139763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.299 [2024-06-11 14:07:32.139778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.299 [2024-06-11 14:07:32.139791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.299 [2024-06-11 14:07:32.143530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.299 [2024-06-11 14:07:32.152794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.299 [2024-06-11 14:07:32.153327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.299 [2024-06-11 14:07:32.153379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.300 [2024-06-11 14:07:32.153411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.300 [2024-06-11 14:07:32.153888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.300 [2024-06-11 14:07:32.154127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.300 [2024-06-11 14:07:32.154142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.300 [2024-06-11 14:07:32.154155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.300 [2024-06-11 14:07:32.157897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.300 [2024-06-11 14:07:32.166951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.300 [2024-06-11 14:07:32.167498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.300 [2024-06-11 14:07:32.167549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.300 [2024-06-11 14:07:32.167582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.300 [2024-06-11 14:07:32.168174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.300 [2024-06-11 14:07:32.168715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.300 [2024-06-11 14:07:32.168730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.300 [2024-06-11 14:07:32.168744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.300 [2024-06-11 14:07:32.172481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.300 [2024-06-11 14:07:32.181076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.300 [2024-06-11 14:07:32.181621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.300 [2024-06-11 14:07:32.181645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.300 [2024-06-11 14:07:32.181658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.300 [2024-06-11 14:07:32.181895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.300 [2024-06-11 14:07:32.182132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.300 [2024-06-11 14:07:32.182148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.300 [2024-06-11 14:07:32.182160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.300 [2024-06-11 14:07:32.185901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.300 [2024-06-11 14:07:32.195164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.300 [2024-06-11 14:07:32.195750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.300 [2024-06-11 14:07:32.195803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.300 [2024-06-11 14:07:32.195834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.300 [2024-06-11 14:07:32.196422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.300 [2024-06-11 14:07:32.197008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.300 [2024-06-11 14:07:32.197024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.300 [2024-06-11 14:07:32.197040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.300 [2024-06-11 14:07:32.200778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.560 [2024-06-11 14:07:32.209374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.209967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.210020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.210052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.210646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.211042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.211066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.211086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.217334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.560 [2024-06-11 14:07:32.224418] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.225021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.225045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.225059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.225315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.225580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.225602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.225616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.229687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.560 [2024-06-11 14:07:32.238627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.239202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.239252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.239284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.239793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.240032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.240048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.240061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.243801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.560 [2024-06-11 14:07:32.252633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.253214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.253264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.253295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.253790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.254028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.254043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.254055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.257794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.560 [2024-06-11 14:07:32.266828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.267405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.267429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.267442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.267688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.267927] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.267943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.267956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.271694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.560 [2024-06-11 14:07:32.280948] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.281527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.281580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.281612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.282118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.282357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.282373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.282385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.286161] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.560 [2024-06-11 14:07:32.294980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.295570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.295622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.295653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.296084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.296326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.296342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.296354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.300092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.560 [2024-06-11 14:07:32.309118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.309684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.309707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.309720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.309959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.310197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.310212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.310225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.313964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.560 [2024-06-11 14:07:32.323217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.560 [2024-06-11 14:07:32.323789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.560 [2024-06-11 14:07:32.323813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.560 [2024-06-11 14:07:32.323827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.560 [2024-06-11 14:07:32.324065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.560 [2024-06-11 14:07:32.324302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.560 [2024-06-11 14:07:32.324318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.560 [2024-06-11 14:07:32.324330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.560 [2024-06-11 14:07:32.328067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
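
Each cycle also logs "Failed to flush tqpair=0xb87400 (9): Bad file descriptor" immediately after the refused connect: the number in parentheses is errno 9 (EBADF), meaning the qpair's socket descriptor has already been closed by the time the flush runs. That is ordinary POSIX behavior for I/O on a closed descriptor, shown here in isolation (illustrative sketch only, not the SPDK code path):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                       /* fd is now invalid */

        char byte = 0;
        if (write(fd, &byte, 1) < 0) {
            /* Prints: write failed (9): Bad file descriptor */
            printf("write failed (%d): %s\n", errno, strerror(errno));
        }
        return 0;
    }
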
00:39:39.560 [2024-06-11 14:07:32.337329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.560 [2024-06-11 14:07:32.337906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.560 [2024-06-11 14:07:32.337958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.560 [2024-06-11 14:07:32.337989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.560 [2024-06-11 14:07:32.338396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.338643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.338659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.338672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.342407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.351458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.352063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.352115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.352147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.352522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.352762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.352778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.352793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.356537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.365586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.366146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.366197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.366228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.366832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.367100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.367115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.367128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.370875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.379708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.380232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.380283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.380315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.380919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.381394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.381409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.381421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.385163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.393777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.394342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.394366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.394383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.394627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.394868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.394884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.394896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.398640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.407919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.408471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.408534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.408566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.409096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.409335] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.409351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.409363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.413276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.422116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.422688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.422711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.422724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.422962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.423201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.423216] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.423229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.426964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.436254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.436842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.436895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.436928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.437427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.437673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.437695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.437708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.441439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.450273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.450802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.450855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.450887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.451423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.451670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.451687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.451699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.561 [2024-06-11 14:07:32.455439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.561 [2024-06-11 14:07:32.464501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.561 [2024-06-11 14:07:32.464997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.561 [2024-06-11 14:07:32.465020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.561 [2024-06-11 14:07:32.465034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.561 [2024-06-11 14:07:32.465272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.561 [2024-06-11 14:07:32.465518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.561 [2024-06-11 14:07:32.465534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.561 [2024-06-11 14:07:32.465547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.821 [2024-06-11 14:07:32.469286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.821 [2024-06-11 14:07:32.478563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.821 [2024-06-11 14:07:32.479065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.821 [2024-06-11 14:07:32.479125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.821 [2024-06-11 14:07:32.479157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.821 [2024-06-11 14:07:32.479679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.821 [2024-06-11 14:07:32.479919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.821 [2024-06-11 14:07:32.479934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.821 [2024-06-11 14:07:32.479947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.821 [2024-06-11 14:07:32.483691] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.821 [2024-06-11 14:07:32.492756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.821 [2024-06-11 14:07:32.493354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.821 [2024-06-11 14:07:32.493405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.821 [2024-06-11 14:07:32.493437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.821 [2024-06-11 14:07:32.494044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.821 [2024-06-11 14:07:32.494539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.821 [2024-06-11 14:07:32.494554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.821 [2024-06-11 14:07:32.494567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.821 [2024-06-11 14:07:32.498306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.821 [2024-06-11 14:07:32.506923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.507507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.507531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.507544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.507782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.508021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.508036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.508048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.511788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.521053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.521597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.521621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.521634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.521871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.522109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.522124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.522137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.525883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.535157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.535743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.535794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.535827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.536425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.536764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.536780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.536793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.540625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.549228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.549789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.549812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.549826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.550062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.550301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.550316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.550328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.554069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.563331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.563814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.563838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.563851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.564088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.564326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.564341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.564354] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.568101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.577378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.577812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.577835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.577849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.578087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.578325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.578340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.578356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.582102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.591599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.592167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.592191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.592204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.592441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.592690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.592706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.592719] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.596459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.605747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.606328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.606380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.606412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.606893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.607133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.607149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.607162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.610903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.619942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.621335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.621367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.621382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.621636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.621876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.621892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.621905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.625647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.822 [2024-06-11 14:07:32.634068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.822 [2024-06-11 14:07:32.634666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.822 [2024-06-11 14:07:32.634690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.822 [2024-06-11 14:07:32.634704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.822 [2024-06-11 14:07:32.634942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.822 [2024-06-11 14:07:32.635180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.822 [2024-06-11 14:07:32.635195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.822 [2024-06-11 14:07:32.635208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.822 [2024-06-11 14:07:32.638946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.823 [2024-06-11 14:07:32.648216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.823 [2024-06-11 14:07:32.648716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.823 [2024-06-11 14:07:32.648740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.823 [2024-06-11 14:07:32.648754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.823 [2024-06-11 14:07:32.648990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.823 [2024-06-11 14:07:32.649229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.823 [2024-06-11 14:07:32.649244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.823 [2024-06-11 14:07:32.649257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.823 [2024-06-11 14:07:32.653000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.823 [2024-06-11 14:07:32.662264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:39.823 [2024-06-11 14:07:32.662772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:39.823 [2024-06-11 14:07:32.662797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:39.823 [2024-06-11 14:07:32.662810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:39.823 [2024-06-11 14:07:32.663045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:39.823 [2024-06-11 14:07:32.663285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:39.823 [2024-06-11 14:07:32.663300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:39.823 [2024-06-11 14:07:32.663312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:39.823 [2024-06-11 14:07:32.667053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:39.823 [2024-06-11 14:07:32.676355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.823 [2024-06-11 14:07:32.676860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.823 [2024-06-11 14:07:32.676884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.823 [2024-06-11 14:07:32.676897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.823 [2024-06-11 14:07:32.677133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.823 [2024-06-11 14:07:32.677377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.823 [2024-06-11 14:07:32.677392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.823 [2024-06-11 14:07:32.677405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.823 [2024-06-11 14:07:32.681149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.823 [2024-06-11 14:07:32.690421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.823 [2024-06-11 14:07:32.690984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.823 [2024-06-11 14:07:32.691036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.823 [2024-06-11 14:07:32.691069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.823 [2024-06-11 14:07:32.691670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.823 [2024-06-11 14:07:32.692226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.823 [2024-06-11 14:07:32.692242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.823 [2024-06-11 14:07:32.692254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.823 [2024-06-11 14:07:32.695995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:39.823 [2024-06-11 14:07:32.704601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.823 [2024-06-11 14:07:32.705166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.823 [2024-06-11 14:07:32.705217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.823 [2024-06-11 14:07:32.705250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.823 [2024-06-11 14:07:32.705773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.823 [2024-06-11 14:07:32.706071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.823 [2024-06-11 14:07:32.706096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.823 [2024-06-11 14:07:32.706116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.823 [2024-06-11 14:07:32.712367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:39.823 [2024-06-11 14:07:32.719755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:39.823 [2024-06-11 14:07:32.720302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:39.823 [2024-06-11 14:07:32.720353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:39.823 [2024-06-11 14:07:32.720386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:39.823 [2024-06-11 14:07:32.720882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:39.823 [2024-06-11 14:07:32.721143] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:39.823 [2024-06-11 14:07:32.721160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:39.823 [2024-06-11 14:07:32.721174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:39.823 [2024-06-11 14:07:32.725245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.084 [2024-06-11 14:07:32.733968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.084 [2024-06-11 14:07:32.734530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.084 [2024-06-11 14:07:32.734582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.084 [2024-06-11 14:07:32.734615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.084 [2024-06-11 14:07:32.735099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.084 [2024-06-11 14:07:32.735338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.084 [2024-06-11 14:07:32.735354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.084 [2024-06-11 14:07:32.735367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.084 [2024-06-11 14:07:32.739130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.084 [2024-06-11 14:07:32.748189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.084 [2024-06-11 14:07:32.748666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.084 [2024-06-11 14:07:32.748691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.084 [2024-06-11 14:07:32.748704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.084 [2024-06-11 14:07:32.748954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.084 [2024-06-11 14:07:32.749194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.084 [2024-06-11 14:07:32.749209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.084 [2024-06-11 14:07:32.749221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.084 [2024-06-11 14:07:32.752968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.084 [2024-06-11 14:07:32.762234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.084 [2024-06-11 14:07:32.762796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.084 [2024-06-11 14:07:32.762820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.084 [2024-06-11 14:07:32.762834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.084 [2024-06-11 14:07:32.763073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.084 [2024-06-11 14:07:32.763311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.084 [2024-06-11 14:07:32.763328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.084 [2024-06-11 14:07:32.763341] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.084 [2024-06-11 14:07:32.767079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.084 [2024-06-11 14:07:32.776343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.084 [2024-06-11 14:07:32.776848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.084 [2024-06-11 14:07:32.776872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.084 [2024-06-11 14:07:32.776888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.084 [2024-06-11 14:07:32.777127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.084 [2024-06-11 14:07:32.777366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.084 [2024-06-11 14:07:32.777381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.084 [2024-06-11 14:07:32.777394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.781141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.085 [2024-06-11 14:07:32.790414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.790967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.790991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.791004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.791241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.791488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.791504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.791517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.795254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.085 [2024-06-11 14:07:32.804534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.805010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.805033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.805045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.805282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.805527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.805544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.805557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.809292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.085 [2024-06-11 14:07:32.818576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.819055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.819105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.819138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.819741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.820307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.820327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.820339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.824088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.085 [2024-06-11 14:07:32.832721] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.833270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.833294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.833307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.833551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.833792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.833807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.833820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.837562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
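The recurring "(9): Bad file descriptor" in the flush entries is errno = 9 (EBADF): by the time nvme_tcp_qpair_process_completions tries to flush the qpair, the failed socket has already been torn down, so the descriptor is no longer valid. A minimal sketch of the same effect (plain POSIX, not SPDK):

    /* Reproduces the "(9): Bad file descriptor" pattern: once the
     * socket is closed, any further send() on that fd fails with
     * EBADF (9 on Linux). Illustrative only. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        close(fd);                    /* socket torn down; fd now invalid */
        if (send(fd, "x", 1, 0) < 0) {
            /* Prints: send failed, errno = 9 (Bad file descriptor) */
            printf("send failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }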
00:39:40.085 [2024-06-11 14:07:32.846832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.847381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.847404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.847417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.847663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.847902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.847917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.847930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.851675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.085 [2024-06-11 14:07:32.860954] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.861536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.861589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.861622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.862210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.862570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.862586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.862599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.866333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.085 [2024-06-11 14:07:32.875174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.875740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.875793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.875826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.876349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.876596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.876612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.876624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.880359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.085 [2024-06-11 14:07:32.889179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.889668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.889692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.889705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.889942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.890181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.890197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.890209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.893948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.085 [2024-06-11 14:07:32.903218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.903781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.903805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.903818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.904055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.904295] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.904310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.904323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.085 [2024-06-11 14:07:32.908070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.085 [2024-06-11 14:07:32.917347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.085 [2024-06-11 14:07:32.917928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.085 [2024-06-11 14:07:32.917979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.085 [2024-06-11 14:07:32.918011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.085 [2024-06-11 14:07:32.918380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.085 [2024-06-11 14:07:32.918754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.085 [2024-06-11 14:07:32.918779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.085 [2024-06-11 14:07:32.918800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.086 [2024-06-11 14:07:32.925052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.086 [2024-06-11 14:07:32.932305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.086 [2024-06-11 14:07:32.932837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.086 [2024-06-11 14:07:32.932862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.086 [2024-06-11 14:07:32.932876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.086 [2024-06-11 14:07:32.933132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.086 [2024-06-11 14:07:32.933392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.086 [2024-06-11 14:07:32.933409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.086 [2024-06-11 14:07:32.933422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.086 [2024-06-11 14:07:32.937499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.086 [2024-06-11 14:07:32.946448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.086 [2024-06-11 14:07:32.947025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.086 [2024-06-11 14:07:32.947077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.086 [2024-06-11 14:07:32.947110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.086 [2024-06-11 14:07:32.947650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.086 [2024-06-11 14:07:32.947889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.086 [2024-06-11 14:07:32.947904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.086 [2024-06-11 14:07:32.947917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.086 [2024-06-11 14:07:32.951919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.086 [2024-06-11 14:07:32.960556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.086 [2024-06-11 14:07:32.961054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.086 [2024-06-11 14:07:32.961078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.086 [2024-06-11 14:07:32.961091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.086 [2024-06-11 14:07:32.961329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.086 [2024-06-11 14:07:32.961577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.086 [2024-06-11 14:07:32.961592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.086 [2024-06-11 14:07:32.961609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.086 [2024-06-11 14:07:32.965346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.086 [2024-06-11 14:07:32.974621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.086 [2024-06-11 14:07:32.975124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.086 [2024-06-11 14:07:32.975148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.086 [2024-06-11 14:07:32.975161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.086 [2024-06-11 14:07:32.975397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.086 [2024-06-11 14:07:32.975644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.086 [2024-06-11 14:07:32.975660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.086 [2024-06-11 14:07:32.975673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.086 [2024-06-11 14:07:32.979410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.086 [2024-06-11 14:07:32.988679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.086 [2024-06-11 14:07:32.989181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.086 [2024-06-11 14:07:32.989204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.086 [2024-06-11 14:07:32.989218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.086 [2024-06-11 14:07:32.989456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.086 [2024-06-11 14:07:32.989703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.086 [2024-06-11 14:07:32.989719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.086 [2024-06-11 14:07:32.989732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:32.993467] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.347 [2024-06-11 14:07:33.002737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.003323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.003373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.003406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.003787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.347 [2024-06-11 14:07:33.004025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.347 [2024-06-11 14:07:33.004040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.347 [2024-06-11 14:07:33.004053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:33.007796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.347 [2024-06-11 14:07:33.016837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.017409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.017432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.017445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.017691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.347 [2024-06-11 14:07:33.017930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.347 [2024-06-11 14:07:33.017945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.347 [2024-06-11 14:07:33.017958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:33.021696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.347 [2024-06-11 14:07:33.030969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.031473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.031503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.031517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.031755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.347 [2024-06-11 14:07:33.031993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.347 [2024-06-11 14:07:33.032009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.347 [2024-06-11 14:07:33.032022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:33.035757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.347 [2024-06-11 14:07:33.045014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.045576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.045599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.045611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.045847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.347 [2024-06-11 14:07:33.046084] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.347 [2024-06-11 14:07:33.046099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.347 [2024-06-11 14:07:33.046112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:33.049851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.347 [2024-06-11 14:07:33.059104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.059681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.059733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.059764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.060353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.347 [2024-06-11 14:07:33.060878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.347 [2024-06-11 14:07:33.060894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.347 [2024-06-11 14:07:33.060906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:33.064642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.347 [2024-06-11 14:07:33.073229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.073785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.073848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.073879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.074467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.347 [2024-06-11 14:07:33.075066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.347 [2024-06-11 14:07:33.075082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.347 [2024-06-11 14:07:33.075094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.347 [2024-06-11 14:07:33.078833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.347 [2024-06-11 14:07:33.087423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.347 [2024-06-11 14:07:33.088017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.347 [2024-06-11 14:07:33.088069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.347 [2024-06-11 14:07:33.088100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.347 [2024-06-11 14:07:33.088705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.089150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.089165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.089178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.092910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.348 [2024-06-11 14:07:33.101501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.102061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.102112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.102143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.102749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.103151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.103167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.103179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.106918] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.348 [2024-06-11 14:07:33.115498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.116069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.116091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.116104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.116339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.116585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.116601] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.116614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.120347] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.348 [2024-06-11 14:07:33.129606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.130167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.130218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.130250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.130744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.131116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.131139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.131158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.137011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.348 [2024-06-11 14:07:33.144150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.144718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.144771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.144802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.145312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.145566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.145582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.145596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.149484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.348 [2024-06-11 14:07:33.158356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.158939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.158991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.159030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.159410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.159658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.159674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.159686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.163421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.348 [2024-06-11 14:07:33.172474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.172998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.173021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.173034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.173270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.173516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.173532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.348 [2024-06-11 14:07:33.173544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.348 [2024-06-11 14:07:33.177279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.348 [2024-06-11 14:07:33.186554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.348 [2024-06-11 14:07:33.187128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.348 [2024-06-11 14:07:33.187179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.348 [2024-06-11 14:07:33.187210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.348 [2024-06-11 14:07:33.187828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.348 [2024-06-11 14:07:33.188068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.348 [2024-06-11 14:07:33.188083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.349 [2024-06-11 14:07:33.188095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.349 [2024-06-11 14:07:33.191834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.349 [2024-06-11 14:07:33.200665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.349 [2024-06-11 14:07:33.201223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.349 [2024-06-11 14:07:33.201274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.349 [2024-06-11 14:07:33.201306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.349 [2024-06-11 14:07:33.201913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.349 [2024-06-11 14:07:33.202411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.349 [2024-06-11 14:07:33.202430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.349 [2024-06-11 14:07:33.202443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.349 [2024-06-11 14:07:33.206185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.349 [2024-06-11 14:07:33.214793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.349 [2024-06-11 14:07:33.215361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.349 [2024-06-11 14:07:33.215384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.349 [2024-06-11 14:07:33.215399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.349 [2024-06-11 14:07:33.215642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.349 [2024-06-11 14:07:33.215882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.349 [2024-06-11 14:07:33.215896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.349 [2024-06-11 14:07:33.215909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.349 [2024-06-11 14:07:33.219652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.349 [2024-06-11 14:07:33.228923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.349 [2024-06-11 14:07:33.229505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.349 [2024-06-11 14:07:33.229558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.349 [2024-06-11 14:07:33.229590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.349 [2024-06-11 14:07:33.230074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.349 [2024-06-11 14:07:33.230312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.349 [2024-06-11 14:07:33.230326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.349 [2024-06-11 14:07:33.230338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.349 [2024-06-11 14:07:33.234093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.349 [2024-06-11 14:07:33.243139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.349 [2024-06-11 14:07:33.243711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.349 [2024-06-11 14:07:33.243733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.349 [2024-06-11 14:07:33.243746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.349 [2024-06-11 14:07:33.243983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.349 [2024-06-11 14:07:33.244220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.349 [2024-06-11 14:07:33.244233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.349 [2024-06-11 14:07:33.244245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.349 [2024-06-11 14:07:33.247988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.610 [2024-06-11 14:07:33.257234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.257813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.257836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.257849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.258086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.258323] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.258336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.258348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.610 [2024-06-11 14:07:33.262090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.610 [2024-06-11 14:07:33.271339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.271937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.271959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.271972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.272208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.272445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.272458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.272470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.610 [2024-06-11 14:07:33.276208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.610 [2024-06-11 14:07:33.285464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.286040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.286062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.286075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.286311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.286554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.286569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.286581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.610 [2024-06-11 14:07:33.290312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.610 [2024-06-11 14:07:33.299570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.300117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.300139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.300152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.300395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.300638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.300652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.300665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.610 [2024-06-11 14:07:33.304390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.610 [2024-06-11 14:07:33.313649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.314133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.314155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.314168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.314404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.314648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.314663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.314675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.610 [2024-06-11 14:07:33.318408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.610 [2024-06-11 14:07:33.327663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.328233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.328255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.328267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.328511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.328755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.328768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.328780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.610 [2024-06-11 14:07:33.332525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.610 [2024-06-11 14:07:33.341792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.610 [2024-06-11 14:07:33.342372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.610 [2024-06-11 14:07:33.342423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.610 [2024-06-11 14:07:33.342454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.610 [2024-06-11 14:07:33.342896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.610 [2024-06-11 14:07:33.343135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.610 [2024-06-11 14:07:33.343149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.610 [2024-06-11 14:07:33.343164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.349330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.611 [2024-06-11 14:07:33.356785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.357390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.357441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.357472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.358074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.358599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.358614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.358628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.362690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.611 [2024-06-11 14:07:33.370924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.371406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.371428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.371441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.371730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.371968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.371982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.371994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.375732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.611 [2024-06-11 14:07:33.384990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.385545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.385597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.385629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.386029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.386266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.386279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.386291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.390031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.611 [2024-06-11 14:07:33.399066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.399651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.399702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.399734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.400322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.400901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.400915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.400927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.404663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.611 [2024-06-11 14:07:33.413252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.413836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.413886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.413917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.414521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.415012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.415026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.415038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.418766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.611 [2024-06-11 14:07:33.427349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.427933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.427984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.428016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.428618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.429062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.429075] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.429088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.432832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.611 [2024-06-11 14:07:33.441424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.442011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.442062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.442093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.442696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.443121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.443134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.443147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.446884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.611 [2024-06-11 14:07:33.455480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.611 [2024-06-11 14:07:33.456059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.611 [2024-06-11 14:07:33.456109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.611 [2024-06-11 14:07:33.456141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.611 [2024-06-11 14:07:33.456678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.611 [2024-06-11 14:07:33.456915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.611 [2024-06-11 14:07:33.456929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.611 [2024-06-11 14:07:33.456941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.611 [2024-06-11 14:07:33.460676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.611 [2024-06-11 14:07:33.469485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:40.611 [2024-06-11 14:07:33.470066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.611 [2024-06-11 14:07:33.470117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:40.611 [2024-06-11 14:07:33.470149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:40.611 [2024-06-11 14:07:33.470688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:40.611 [2024-06-11 14:07:33.470925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:40.611 [2024-06-11 14:07:33.470939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:40.611 [2024-06-11 14:07:33.470951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:40.611 [2024-06-11 14:07:33.474683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:40.611 [2024-06-11 14:07:33.483497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:39:40.611 [2024-06-11 14:07:33.484051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:39:40.611 [2024-06-11 14:07:33.484100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420
00:39:40.611 [2024-06-11 14:07:33.484132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set
00:39:40.611 [2024-06-11 14:07:33.484735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor
00:39:40.611 [2024-06-11 14:07:33.485005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:39:40.611 [2024-06-11 14:07:33.485019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:39:40.611 [2024-06-11 14:07:33.485031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:39:40.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1669682 Killed "${NVMF_APP[@]}" "$@"
00:39:40.611 14:07:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:39:40.611 14:07:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:39:40.611 14:07:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable
00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:39:40.612 [2024-06-11 14:07:33.488769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
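Editor's note: the 'Killed "${NVMF_APP[@]}"' line from bdevperf.sh line 35 explains the whole failure storm: the test deliberately kills the running nvmf target, so every reconnect attempt lands on a dead port until tgt_init/nvmfappstart bring up a fresh one. Roughly, the restart amounts to the following (binary path and options copied from the trace in the next lines; the rest of the harness is omitted):

    # Relaunch the target inside its network namespace, then remember its pid
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!    # consumed by waitforlisten below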
00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1671045 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1671045 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1671045 ']' 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.612 [2024-06-11 14:07:33.497590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:40.612 14:07:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:40.612 [2024-06-11 14:07:33.498134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.612 [2024-06-11 14:07:33.498156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.612 [2024-06-11 14:07:33.498169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.612 [2024-06-11 14:07:33.498407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.612 [2024-06-11 14:07:33.498650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.612 [2024-06-11 14:07:33.498665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.612 [2024-06-11 14:07:33.498677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.612 [2024-06-11 14:07:33.502405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
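Editor's note: waitforlisten then polls the new process until it answers on its RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the trace above). A minimal stand-in for that loop, assuming the default socket path and SPDK's stock rpc.py, run from the repo root:

    rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app's RPC server is up
        [ -S "$rpc_addr" ] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done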
00:39:40.612 [2024-06-11 14:07:33.511667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.612 [2024-06-11 14:07:33.512239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.612 [2024-06-11 14:07:33.512261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.612 [2024-06-11 14:07:33.512274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.612 [2024-06-11 14:07:33.512515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.612 [2024-06-11 14:07:33.512752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.612 [2024-06-11 14:07:33.512765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.612 [2024-06-11 14:07:33.512778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.612 [2024-06-11 14:07:33.516519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.873 [2024-06-11 14:07:33.525776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.873 [2024-06-11 14:07:33.526344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.873 [2024-06-11 14:07:33.526366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.873 [2024-06-11 14:07:33.526379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.873 [2024-06-11 14:07:33.526622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.873 [2024-06-11 14:07:33.526860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.873 [2024-06-11 14:07:33.526873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.873 [2024-06-11 14:07:33.526885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.873 [2024-06-11 14:07:33.530628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.873 [2024-06-11 14:07:33.539884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.873 [2024-06-11 14:07:33.540432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.873 [2024-06-11 14:07:33.540454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.873 [2024-06-11 14:07:33.540467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.873 [2024-06-11 14:07:33.540710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.873 [2024-06-11 14:07:33.540947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.873 [2024-06-11 14:07:33.540961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.873 [2024-06-11 14:07:33.540973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.873 [2024-06-11 14:07:33.544708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.873 [2024-06-11 14:07:33.550342] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:39:40.873 [2024-06-11 14:07:33.550396] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:40.873 [2024-06-11 14:07:33.553970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.873 [2024-06-11 14:07:33.554514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.873 [2024-06-11 14:07:33.554543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.873 [2024-06-11 14:07:33.554556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.873 [2024-06-11 14:07:33.554793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.873 [2024-06-11 14:07:33.555030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.873 [2024-06-11 14:07:33.555044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.873 [2024-06-11 14:07:33.555056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.873 [2024-06-11 14:07:33.558795] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
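Editor's note: the "DPDK EAL parameters" line above carries the core mask requested by nvmfappstart -m 0xE: 0xE is binary 1110, selecting cores 1, 2 and 3, which matches the "Total cores available: 3" notice and the three reactor threads started further down. Decoding such a mask in shell:

    mask=0xE
    for c in {0..7}; do (( (mask >> c) & 1 )) && echo "core $c"; done
    # prints core 1, core 2, core 3 (one per line)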
00:39:40.873 [2024-06-11 14:07:33.568163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.873 [2024-06-11 14:07:33.568715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.873 [2024-06-11 14:07:33.568739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.873 [2024-06-11 14:07:33.568752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.873 [2024-06-11 14:07:33.568989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.873 [2024-06-11 14:07:33.569226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.873 [2024-06-11 14:07:33.569240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.873 [2024-06-11 14:07:33.569252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.873 [2024-06-11 14:07:33.572986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.873 [2024-06-11 14:07:33.582244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.873 [2024-06-11 14:07:33.582801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.873 [2024-06-11 14:07:33.582824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.873 [2024-06-11 14:07:33.582838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.873 [2024-06-11 14:07:33.583073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.873 [2024-06-11 14:07:33.583310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.873 [2024-06-11 14:07:33.583323] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.873 [2024-06-11 14:07:33.583335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.873 [2024-06-11 14:07:33.587075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.873 [2024-06-11 14:07:33.596328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.873 [2024-06-11 14:07:33.596911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.873 [2024-06-11 14:07:33.596934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.873 [2024-06-11 14:07:33.596947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.873 [2024-06-11 14:07:33.597184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.873 [2024-06-11 14:07:33.597421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.873 [2024-06-11 14:07:33.597435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.597447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 EAL: No free 2048 kB hugepages reported on node 1 00:39:40.874 [2024-06-11 14:07:33.601188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.874 [2024-06-11 14:07:33.610449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.611023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.611045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.611062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.611298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.611539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.611553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.611566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.615298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
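Editor's note: the "EAL: No free 2048 kB hugepages reported on node 1" notice embedded above is per NUMA node; the run proceeds, so hugepage memory was evidently reserved on the other socket (or as 1 GiB pages). For reference, a hedged sketch of how hugepages are normally set aside before an SPDK app starts; the 4096 MB figure is an example, not what this job used:

    # Reserve hugepages (and rebind NVMe devices) via SPDK's setup script
    sudo HUGEMEM=4096 scripts/setup.sh
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # sanity-check the reservation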
00:39:40.874 [2024-06-11 14:07:33.624557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.625112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.625134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.625147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.625384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.625628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.625643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.625655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.629385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.874 [2024-06-11 14:07:33.638655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.639204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.639227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.639240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.639483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.639720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.639734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.639746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.643484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.874 [2024-06-11 14:07:33.649135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:40.874 [2024-06-11 14:07:33.652744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.653290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.653312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.653326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.653568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.653806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.653825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.653838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.657577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.874 [2024-06-11 14:07:33.666837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.667408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.667430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.667443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.667685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.667923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.667937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.667949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.671686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.874 [2024-06-11 14:07:33.680943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.681509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.681531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.681545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.681781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.682018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.682032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.682044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.685786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.874 [2024-06-11 14:07:33.695045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.695637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.695663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.695676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.695913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.696151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.696166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.696178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.699925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.874 [2024-06-11 14:07:33.709195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.709769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.709792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.709805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.710041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.710278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.710292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.710304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.714044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.874 [2024-06-11 14:07:33.723304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.874 [2024-06-11 14:07:33.723770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.874 [2024-06-11 14:07:33.723792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.874 [2024-06-11 14:07:33.723805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.874 [2024-06-11 14:07:33.724041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.874 [2024-06-11 14:07:33.724278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.874 [2024-06-11 14:07:33.724292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.874 [2024-06-11 14:07:33.724304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.874 [2024-06-11 14:07:33.728040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.874 [2024-06-11 14:07:33.736768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:40.874 [2024-06-11 14:07:33.736797] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:40.874 [2024-06-11 14:07:33.736811] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:40.874 [2024-06-11 14:07:33.736823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:40.874 [2024-06-11 14:07:33.736833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
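Editor's note: the app_setup_trace notices at the end of the block above explain how to consume the 0xFFFF tracepoint mask passed with -e: attach spdk_trace to the live app, or keep the shared-memory file for offline decoding. Following the app's own instructions (the build/bin path is an assumption about where spdk_trace was built):

    # Snapshot trace events from the running app (shm instance id 0)
    build/bin/spdk_trace -s nvmf -i 0
    # Or preserve the raw trace for offline analysis after the app exits
    cp /dev/shm/nvmf_trace.0 /tmp/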
00:39:40.874 [2024-06-11 14:07:33.736910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.875 [2024-06-11 14:07:33.737009] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:39:40.875 [2024-06-11 14:07:33.737052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:40.875 [2024-06-11 14:07:33.737306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.875 [2024-06-11 14:07:33.737862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.875 [2024-06-11 14:07:33.737884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.875 [2024-06-11 14:07:33.737898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.875 [2024-06-11 14:07:33.738134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.875 [2024-06-11 14:07:33.738372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.875 [2024-06-11 14:07:33.738390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.875 [2024-06-11 14:07:33.738404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.875 [2024-06-11 14:07:33.742141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.875 [2024-06-11 14:07:33.751398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.875 [2024-06-11 14:07:33.752005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.875 [2024-06-11 14:07:33.752032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.875 [2024-06-11 14:07:33.752045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.875 [2024-06-11 14:07:33.752283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.875 [2024-06-11 14:07:33.752528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.875 [2024-06-11 14:07:33.752543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.875 [2024-06-11 14:07:33.752555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.875 [2024-06-11 14:07:33.756290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:40.875 [2024-06-11 14:07:33.765557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.875 [2024-06-11 14:07:33.766047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.875 [2024-06-11 14:07:33.766073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.875 [2024-06-11 14:07:33.766086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.875 [2024-06-11 14:07:33.766324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.875 [2024-06-11 14:07:33.766569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.875 [2024-06-11 14:07:33.766584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.875 [2024-06-11 14:07:33.766597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:40.875 [2024-06-11 14:07:33.770330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:40.875 [2024-06-11 14:07:33.779601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:40.875 [2024-06-11 14:07:33.780122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:40.875 [2024-06-11 14:07:33.780146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:40.875 [2024-06-11 14:07:33.780160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:40.875 [2024-06-11 14:07:33.780396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:40.875 [2024-06-11 14:07:33.780639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:40.875 [2024-06-11 14:07:33.780654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:40.875 [2024-06-11 14:07:33.780667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.784404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.136 [2024-06-11 14:07:33.793670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.794180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.794204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.794218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.794456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.794701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.794716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.794729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.798465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.136 [2024-06-11 14:07:33.807734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.808309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.808332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.808346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.808589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.808827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.808842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.808854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.812590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.136 [2024-06-11 14:07:33.821865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.822434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.822456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.822469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.822713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.822950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.822965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.822977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.826712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.136 [2024-06-11 14:07:33.835979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.836550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.836574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.836587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.836828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.837066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.837080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.837092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.840833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.136 [2024-06-11 14:07:33.850090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.850577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.850618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.850632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.850870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.851108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.851122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.851135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.854875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.136 [2024-06-11 14:07:33.864122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.864695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.864721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.864735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.864975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.865213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.865227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.865240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.868982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.136 [2024-06-11 14:07:33.878235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.136 [2024-06-11 14:07:33.878599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.136 [2024-06-11 14:07:33.878622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.136 [2024-06-11 14:07:33.878635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.136 [2024-06-11 14:07:33.878871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.136 [2024-06-11 14:07:33.879109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.136 [2024-06-11 14:07:33.879123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.136 [2024-06-11 14:07:33.879139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.136 [2024-06-11 14:07:33.882878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.137 [2024-06-11 14:07:33.892370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.892926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.892949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.892963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.893198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.893435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.893450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.893462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.897196] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.137 [2024-06-11 14:07:33.906457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.907030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.907053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.907066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.907302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.907549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.907565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.907577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.911309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.137 [2024-06-11 14:07:33.920566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.921061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.921083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.921096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.921332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.921577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.921592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.921605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.925335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.137 [2024-06-11 14:07:33.934592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.935141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.935167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.935181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.935417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.935661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.935676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.935688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.939419] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.137 [2024-06-11 14:07:33.948678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.949257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.949279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.949293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.949536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.949773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.949788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.949800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.953817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.137 [2024-06-11 14:07:33.962865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.963413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.963437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.963451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.963692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.963930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.963944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.963957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.967694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.137 [2024-06-11 14:07:33.976946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.977528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.977551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.977565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.977801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.978047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.978061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.978073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.981809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.137 [2024-06-11 14:07:33.991063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:33.991551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:33.991574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:33.991588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:33.991823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:33.992060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:33.992074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:33.992086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:33.995826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.137 [2024-06-11 14:07:34.005080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:34.005626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:34.005650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:34.005664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:34.005900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:34.006139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:34.006153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:34.006166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.137 [2024-06-11 14:07:34.009903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.137 [2024-06-11 14:07:34.019166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.137 [2024-06-11 14:07:34.019733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.137 [2024-06-11 14:07:34.019756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.137 [2024-06-11 14:07:34.019770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.137 [2024-06-11 14:07:34.020006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.137 [2024-06-11 14:07:34.020244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.137 [2024-06-11 14:07:34.020259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.137 [2024-06-11 14:07:34.020271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.138 [2024-06-11 14:07:34.024011] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.138 [2024-06-11 14:07:34.033275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.138 [2024-06-11 14:07:34.033751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.138 [2024-06-11 14:07:34.033775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.138 [2024-06-11 14:07:34.033788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.138 [2024-06-11 14:07:34.034025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.138 [2024-06-11 14:07:34.034263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.138 [2024-06-11 14:07:34.034277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.138 [2024-06-11 14:07:34.034289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.138 [2024-06-11 14:07:34.038026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.398 [2024-06-11 14:07:34.047289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.398 [2024-06-11 14:07:34.047792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.398 [2024-06-11 14:07:34.047816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.398 [2024-06-11 14:07:34.047829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.398 [2024-06-11 14:07:34.048065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.398 [2024-06-11 14:07:34.048303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.398 [2024-06-11 14:07:34.048318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.398 [2024-06-11 14:07:34.048330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.398 [2024-06-11 14:07:34.052067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.398 [2024-06-11 14:07:34.061328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.398 [2024-06-11 14:07:34.061871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.398 [2024-06-11 14:07:34.061894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.398 [2024-06-11 14:07:34.061907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.398 [2024-06-11 14:07:34.062143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.398 [2024-06-11 14:07:34.062379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.398 [2024-06-11 14:07:34.062393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.398 [2024-06-11 14:07:34.062406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.398 [2024-06-11 14:07:34.066144] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.398 [2024-06-11 14:07:34.075425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.398 [2024-06-11 14:07:34.075910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.398 [2024-06-11 14:07:34.075933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.398 [2024-06-11 14:07:34.075950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.398 [2024-06-11 14:07:34.076187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.398 [2024-06-11 14:07:34.076426] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.398 [2024-06-11 14:07:34.076440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.398 [2024-06-11 14:07:34.076452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.398 [2024-06-11 14:07:34.080188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.398 [2024-06-11 14:07:34.089442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.398 [2024-06-11 14:07:34.090015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.398 [2024-06-11 14:07:34.090037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.398 [2024-06-11 14:07:34.090050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.398 [2024-06-11 14:07:34.090286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.398 [2024-06-11 14:07:34.090530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.398 [2024-06-11 14:07:34.090545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.398 [2024-06-11 14:07:34.090557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.398 [2024-06-11 14:07:34.094284] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.398 [2024-06-11 14:07:34.103547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.398 [2024-06-11 14:07:34.104044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.398 [2024-06-11 14:07:34.104067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.398 [2024-06-11 14:07:34.104080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.398 [2024-06-11 14:07:34.104317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.398 [2024-06-11 14:07:34.104560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.398 [2024-06-11 14:07:34.104575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.398 [2024-06-11 14:07:34.104587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.398 [2024-06-11 14:07:34.108322] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.398 [2024-06-11 14:07:34.117584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.398 [2024-06-11 14:07:34.118099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.118122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.118135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.118371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.118615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.118634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.118647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.122378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.399 [2024-06-11 14:07:34.131653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.132166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.132187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.132200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.132435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.132681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.132696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.132708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.136438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.399 [2024-06-11 14:07:34.145700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.146190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.146212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.146225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.146463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.146706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.146721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.146733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.150464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.399 [2024-06-11 14:07:34.159727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.160269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.160291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.160304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.160547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.160784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.160799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.160811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.164546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.399 [2024-06-11 14:07:34.173804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.174358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.174380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.174393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.174635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.174873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.174888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.174900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.178638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.399 [2024-06-11 14:07:34.187895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.188400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.188422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.188435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.188677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.188914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.188928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.188941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.192680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.399 [2024-06-11 14:07:34.201928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.202364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.202387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.202400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.202642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.202880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.202894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.202906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.206641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.399 [2024-06-11 14:07:34.216115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.216587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.216610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.216623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.216862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.217099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.217114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.217126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.220870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.399 [2024-06-11 14:07:34.230131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.230611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.230633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.230646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.230883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.231120] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.231134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.231146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.234895] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.399 [2024-06-11 14:07:34.244152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.244723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.244746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.244760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.399 [2024-06-11 14:07:34.244996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.399 [2024-06-11 14:07:34.245232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.399 [2024-06-11 14:07:34.245247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.399 [2024-06-11 14:07:34.245259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.399 [2024-06-11 14:07:34.248998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.399 [2024-06-11 14:07:34.258259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.399 [2024-06-11 14:07:34.258791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.399 [2024-06-11 14:07:34.258814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.399 [2024-06-11 14:07:34.258827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.400 [2024-06-11 14:07:34.259063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.400 [2024-06-11 14:07:34.259300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.400 [2024-06-11 14:07:34.259314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.400 [2024-06-11 14:07:34.259330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.400 [2024-06-11 14:07:34.263068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.400 [2024-06-11 14:07:34.272414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.400 [2024-06-11 14:07:34.272899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.400 [2024-06-11 14:07:34.272922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.400 [2024-06-11 14:07:34.272935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.400 [2024-06-11 14:07:34.273172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.400 [2024-06-11 14:07:34.273409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.400 [2024-06-11 14:07:34.273423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.400 [2024-06-11 14:07:34.273435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.400 [2024-06-11 14:07:34.277171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.400 [2024-06-11 14:07:34.286428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.400 [2024-06-11 14:07:34.286919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.400 [2024-06-11 14:07:34.286942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.400 [2024-06-11 14:07:34.286955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.400 [2024-06-11 14:07:34.287192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.400 [2024-06-11 14:07:34.287429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.400 [2024-06-11 14:07:34.287443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.400 [2024-06-11 14:07:34.287455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.400 [2024-06-11 14:07:34.291193] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.400 [2024-06-11 14:07:34.300449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.400 [2024-06-11 14:07:34.300950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.400 [2024-06-11 14:07:34.300973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.400 [2024-06-11 14:07:34.300986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.400 [2024-06-11 14:07:34.301223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.400 [2024-06-11 14:07:34.301459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.400 [2024-06-11 14:07:34.301474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.400 [2024-06-11 14:07:34.301494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.400 [2024-06-11 14:07:34.305227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.661 [2024-06-11 14:07:34.314493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.661 [2024-06-11 14:07:34.315068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.661 [2024-06-11 14:07:34.315095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.661 [2024-06-11 14:07:34.315108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.661 [2024-06-11 14:07:34.315343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.661 [2024-06-11 14:07:34.315587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.661 [2024-06-11 14:07:34.315601] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.661 [2024-06-11 14:07:34.315614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.661 [2024-06-11 14:07:34.319346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.662 [2024-06-11 14:07:34.328613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.329030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.329053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.329066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.329302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.329549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.329564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.329576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.333323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.662 [2024-06-11 14:07:34.342808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.343289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.343311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.343325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.343568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.343808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.343823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.343835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.347571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.662 [2024-06-11 14:07:34.356828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.357312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.357334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.357347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.357590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.357831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.357846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.357859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.361593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.662 [2024-06-11 14:07:34.370855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.371404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.371427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.371440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.371682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.371920] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.371935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.371948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.375684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.662 [2024-06-11 14:07:34.384945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.385441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.385463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.385482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.385719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.385956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.385970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.385982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.389717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.662 [2024-06-11 14:07:34.399155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.399706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.399730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.399743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.399979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.400216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.400230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.400242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.403983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
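Dozens of identical cycles like the ones above accumulate before the test harness resumes below. When triaging a console log of this shape, it can help to count the reset attempts and confirm every failure took the same ECONNREFUSED path rather than a mix of errors. A small sketch under stated assumptions: the patterns are copied from the records above, and console.log is a placeholder file name for a saved copy of this output:

```python
import re
from collections import Counter

# Record shapes copied from the log above; console.log is a placeholder name.
RESET_RE = re.compile(
    r"nvme_ctrlr_disconnect: \*NOTICE\*: \[(?P<nqn>[^\]]+)\] resetting controller"
)
ERRNO_RE = re.compile(r"connect\(\) failed, errno = (?P<errno>\d+)")

def summarize(path: str) -> None:
    """Count reset attempts and tally connect() errnos in a console log."""
    resets = 0
    errnos: Counter[str] = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            resets += len(RESET_RE.findall(line))
            errnos.update(m.group("errno") for m in ERRNO_RE.finditer(line))
    print(f"reset attempts: {resets}")
    for num, count in errnos.most_common():
        print(f"errno {num}: {count} occurrences")

if __name__ == "__main__":
    summarize("console.log")
```

For this section the tally would show every failed attempt carrying errno 111, which points at a target that is simply not listening yet rather than a protocol-level fault.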
00:39:41.662 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:41.662 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:39:41.662 14:07:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:41.662 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:41.662 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:41.662 [2024-06-11 14:07:34.413241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.413739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.413762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.413775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.414012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.414250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.414264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.414276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.418016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.662 [2024-06-11 14:07:34.427275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.427834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.427856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.427871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.428108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.428345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.428359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.428372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.432118] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.662 [2024-06-11 14:07:34.441378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.441746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.441768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.441782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.442017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.442255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.442269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.662 [2024-06-11 14:07:34.442282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.662 [2024-06-11 14:07:34.446026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.662 [2024-06-11 14:07:34.455510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.662 [2024-06-11 14:07:34.455937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.662 [2024-06-11 14:07:34.455960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.662 [2024-06-11 14:07:34.455973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.662 [2024-06-11 14:07:34.456208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.662 [2024-06-11 14:07:34.456444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.662 [2024-06-11 14:07:34.456459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.663 [2024-06-11 14:07:34.456471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:41.663 [2024-06-11 14:07:34.460207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:39:41.663 [2024-06-11 14:07:34.461114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:41.663 [2024-06-11 14:07:34.469692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.663 [2024-06-11 14:07:34.470170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.663 [2024-06-11 14:07:34.470193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.663 [2024-06-11 14:07:34.470206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.663 [2024-06-11 14:07:34.470443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.663 [2024-06-11 14:07:34.470690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.663 [2024-06-11 14:07:34.470705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.663 [2024-06-11 14:07:34.470717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.663 [2024-06-11 14:07:34.474451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.663 [2024-06-11 14:07:34.483714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.663 [2024-06-11 14:07:34.484213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.663 [2024-06-11 14:07:34.484236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.663 [2024-06-11 14:07:34.484249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.663 [2024-06-11 14:07:34.484491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.663 [2024-06-11 14:07:34.484733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.663 [2024-06-11 14:07:34.484747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.663 [2024-06-11 14:07:34.484759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.663 [2024-06-11 14:07:34.488497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
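From nvmf_create_transport above onward, two processes are interleaved in this log: the still-retrying bdevperf host (the *NOTICE*/*ERROR* records) and the bdevperf.sh script building the target it will eventually reconnect to (the `-- #` xtrace records). The `bdev_malloc_create 64 512 -b Malloc0` call just traced gives that target a RAM-backed bdev, so the reset behaviour is exercised with no physical disk involved; the two positional arguments are total size in MiB and block size in bytes, the same 64/512 pair the suite keeps in MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE. The same call with the values named (a sketch; the shell variables here just mirror the trace):

    MALLOC_BDEV_SIZE=64     # MiB
    MALLOC_BLOCK_SIZE=512   # bytes
    scripts/rpc.py bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc0
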
00:39:41.663 [2024-06-11 14:07:34.497767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.663 [2024-06-11 14:07:34.498347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.663 [2024-06-11 14:07:34.498370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.663 [2024-06-11 14:07:34.498383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.663 [2024-06-11 14:07:34.498626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.663 [2024-06-11 14:07:34.498865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.663 [2024-06-11 14:07:34.498880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.663 [2024-06-11 14:07:34.498892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.663 Malloc0 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:41.663 [2024-06-11 14:07:34.502625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.663 [2024-06-11 14:07:34.511872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.663 [2024-06-11 14:07:34.512443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:41.663 [2024-06-11 14:07:34.512466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb87400 with addr=10.0.0.2, port=4420 00:39:41.663 [2024-06-11 14:07:34.512485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87400 is same with the state(5) to be set 00:39:41.663 [2024-06-11 14:07:34.512721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb87400 (9): Bad file descriptor 00:39:41.663 [2024-06-11 14:07:34.512959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:39:41.663 [2024-06-11 14:07:34.512974] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:39:41.663 [2024-06-11 14:07:34.512986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:41.663 [2024-06-11 14:07:34.516723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
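Pulled out of the interleaving, the target bring-up that bdevperf.sh lines 17-21 drive through rpc_cmd is the standard five-step NVMe-oF/TCP sequence. Shown back to back as direct scripts/rpc.py calls (a sketch of what the rpc_cmd wrapper executes, with every flag copied from the trace; the listener call appears just below):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport layer
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # backing bdev
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # attach bdev as a namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the next reconnect attempt finally lands ("Resetting controller successful" below) and the stalled bdevperf run can complete.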
00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:41.663 [2024-06-11 14:07:34.523071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:41.663 [2024-06-11 14:07:34.525985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.663 14:07:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1670003 00:39:41.663 [2024-06-11 14:07:34.558573] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:51.643 00:39:51.643 Latency(us) 00:39:51.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.643 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:51.643 Verification LBA range: start 0x0 length 0x4000 00:39:51.643 Nvme1n1 : 15.05 6239.80 24.37 9241.63 0.00 8219.80 851.97 42362.47 00:39:51.643 =================================================================================================================== 00:39:51.643 Total : 6239.80 24.37 9241.63 0.00 8219.80 851.97 42362.47 00:39:51.643 14:07:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:39:51.643 14:07:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:51.643 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.643 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:51.643 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:51.644 rmmod nvme_tcp 00:39:51.644 rmmod nvme_fabrics 00:39:51.644 rmmod nvme_keyring 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1671045 ']' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1671045 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1671045 ']' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1671045 00:39:51.644 14:07:43 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1671045 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1671045' 00:39:51.644 killing process with pid 1671045 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1671045 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1671045 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:51.644 14:07:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.022 14:07:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:53.022 00:39:53.022 real 0m28.162s 00:39:53.022 user 1m3.581s 00:39:53.022 sys 0m8.501s 00:39:53.022 14:07:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:53.022 14:07:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:39:53.022 ************************************ 00:39:53.023 END TEST nvmf_bdevperf 00:39:53.023 ************************************ 00:39:53.023 14:07:45 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:53.023 14:07:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:53.023 14:07:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:53.023 14:07:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:53.023 ************************************ 00:39:53.023 START TEST nvmf_target_disconnect 00:39:53.023 ************************************ 00:39:53.023 14:07:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:39:53.282 * Looking for test storage... 
00:39:53.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.282 14:07:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:39:53.283 14:07:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:59.850 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:59.850 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.850 14:07:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:59.850 Found net devices under 0000:af:00.0: cvl_0_0 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:59.850 Found net devices under 0000:af:00.1: cvl_0_1 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:59.850 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:59.851 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:59.851 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:59.851 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:59.851 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:59.851 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:59.851 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:00.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:00.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:40:00.137 00:40:00.137 --- 10.0.0.2 ping statistics --- 00:40:00.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.137 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:00.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:00.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:40:00.137 00:40:00.137 --- 10.0.0.1 ping statistics --- 00:40:00.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.137 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:00.137 14:07:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:00.137 ************************************ 00:40:00.137 START TEST nvmf_target_disconnect_tc1 00:40:00.137 ************************************ 00:40:00.137 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:40:00.137 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:00.138 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:40:00.138 
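Before tc1's reconnect run proceeds, it is worth condensing what the trace above just did: the two E810 ports found earlier (0000:af:00.0/1, device 0x8086:0x159b) surfaced as cvl_0_0 and cvl_0_1; the target port was moved into its own network namespace with 10.0.0.2/24 while the initiator port kept 10.0.0.1/24 in the host namespace; port 4420 was opened; and reachability was proven in both directions (0.278 ms and 0.100 ms RTTs). The topology, replayed by hand (commands as in the trace; the cvl_* names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the host namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This separation is why a target started later under `ip netns exec cvl_0_0_ns_spdk` can be killed without disturbing the initiator side of the link.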
14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:00.138 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:00.410 EAL: No free 2048 kB hugepages reported on node 1 00:40:00.410 [2024-06-11 14:07:53.170976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:00.410 [2024-06-11 14:07:53.171039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1466ec0 with addr=10.0.0.2, port=4420 00:40:00.410 [2024-06-11 14:07:53.171073] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:00.410 [2024-06-11 14:07:53.171096] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:00.410 [2024-06-11 14:07:53.171108] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:40:00.410 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:40:00.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:40:00.410 Initializing NVMe Controllers 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:00.410 00:40:00.410 real 0m0.155s 00:40:00.410 user 0m0.048s 00:40:00.410 sys 
0m0.107s 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:00.410 ************************************ 00:40:00.410 END TEST nvmf_target_disconnect_tc1 00:40:00.410 ************************************ 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:00.410 ************************************ 00:40:00.410 START TEST nvmf_target_disconnect_tc2 00:40:00.410 ************************************ 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1676377 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1676377 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1676377 ']' 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:00.410 14:07:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:00.669 [2024-06-11 14:07:53.325873] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
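tc1, which just finished, passes precisely because connecting fails: nothing was listening yet, so spdk_nvme_probe() could not create the admin qpair, the reconnect example exited non-zero, and the NOT wrapper turned that expected failure into a pass (the es=1 / (( !es == 0 )) dance in the trace). A sketch of that inversion idiom as used above (reconstructed shape, not the verbatim helper from autotest_common.sh):

    NOT() {   # succeed only when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

tc2 now takes the opposite path: disconnect_init starts a real target (nvmf_tgt on core mask 0xF0, inside the cvl_0_0_ns_spdk namespace, pid 1676377 here) so the same reconnect binary can attach first and then have the target torn out from under it.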
00:40:00.669 [2024-06-11 14:07:53.325924] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:00.669 EAL: No free 2048 kB hugepages reported on node 1 00:40:00.669 [2024-06-11 14:07:53.429488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:00.669 [2024-06-11 14:07:53.516693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:00.669 [2024-06-11 14:07:53.516739] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:00.669 [2024-06-11 14:07:53.516752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:00.669 [2024-06-11 14:07:53.516764] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:00.669 [2024-06-11 14:07:53.516775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:00.669 [2024-06-11 14:07:53.516921] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:40:00.669 [2024-06-11 14:07:53.520496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:40:00.669 [2024-06-11 14:07:53.520613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:40:00.669 [2024-06-11 14:07:53.520613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 Malloc0 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 [2024-06-11 14:07:54.317416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 [2024-06-11 14:07:54.349722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1676660 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:40:01.605 14:07:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:01.605 EAL: No free 2048 kB hugepages reported on node 1 00:40:03.516 14:07:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1676377 00:40:03.516 14:07:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 
00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 [2024-06-11 14:07:56.381782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 
starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Read completed with error (sct=0, sc=8) 00:40:03.516 starting I/O failed 00:40:03.516 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 [2024-06-11 14:07:56.382095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 
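This completion storm (continuing below) is tc2 behaving as designed: the script kill -9'ed the target (pid 1676377) two seconds into the reconnect run, so the host side fails every outstanding I/O. sct=0, sc=8 decodes as NVMe generic status "command aborted due to SQ deletion", the status SPDK uses when it aborts requests on a dead queue pair, and the per-qpair "CQ transport error -6 (No such device or address)" is ENXIO bubbling up from the severed TCP connection. A quick cross-check of that errno (illustrative only, assumes python3 is available):

    python3 -c 'import errno, os; print(errno.ENXIO, os.strerror(errno.ENXIO))'   # 6 No such device or address
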
00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 [2024-06-11 14:07:56.382401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 
Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Read completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 Write completed with error (sct=0, sc=8) 00:40:03.517 starting I/O failed 00:40:03.517 [2024-06-11 14:07:56.382630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:03.517 [2024-06-11 14:07:56.382874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.382899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 00:40:03.517 [2024-06-11 14:07:56.383189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.383206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 00:40:03.517 [2024-06-11 14:07:56.383362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.383378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 00:40:03.517 [2024-06-11 14:07:56.383665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.383683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 00:40:03.517 [2024-06-11 14:07:56.383931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.383948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 00:40:03.517 [2024-06-11 14:07:56.384186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.384202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 00:40:03.517 [2024-06-11 14:07:56.384448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.517 [2024-06-11 14:07:56.384464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.517 qpair failed and we were unable to recover it. 
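Per the NVMe base specification, the repeated (sct=0, sc=8) pairs above decode as status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion": the outstanding I/Os were aborted when their qpair went down. A minimal decoding sketch in C (a hypothetical helper, not part of SPDK; it maps only the codes relevant to this log):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical decoder for the "(sct=0, sc=8)" pairs in the log above.
 * Values follow the NVMe base spec; only nearby generic codes are mapped. */
static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct != 0)                      /* sct 0 = generic command status */
        return "non-generic status code type";
    switch (sc) {
    case 0x00: return "successful completion";
    case 0x04: return "data transfer error";
    case 0x08: return "command aborted due to SQ deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    printf("sct=0, sc=8 -> %s\n", decode_status(0, 0x08));
    return 0;
}
```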
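The "CQ transport error -6 (No such device or address)" lines are a negated errno: on Linux, errno 6 is ENXIO, and the quoted message text is exactly strerror(6). A small sketch of turning such a negative poller return code back into that message (plain C, assuming the Linux errno numbering this log was produced with):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int rc = -6;  /* return code reported in the log: "CQ transport error -6" */

    if (rc < 0)   /* negative poller returns are negated errno values */
        printf("rc=%d (%s): %s\n", rc,
               (-rc == ENXIO) ? "ENXIO" : "?", strerror(-rc));
    return 0;
}
```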
00:40:03.518 [2024-06-11 14:07:56.390718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.518 [2024-06-11 14:07:56.390737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:03.518 qpair failed and we were unable to recover it.
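errno = 111 from connect() is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 (4420 being the standard NVMe/TCP port), so every reconnect attempt fails the same way. The sketch below reproduces the same errno by connecting to a port assumed to be closed; 127.0.0.1 stands in for the CI target and is an assumption, not taken from the log:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) }; /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumption: nothing listens here */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* expected: errno 111 (ECONNREFUSED), matching the log */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}
```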
00:40:03.795 [2024-06-11 14:07:56.435794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.795 [2024-06-11 14:07:56.435807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.795 qpair failed and we were unable to recover it. 00:40:03.795 [2024-06-11 14:07:56.436122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.795 [2024-06-11 14:07:56.436161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.795 qpair failed and we were unable to recover it. 00:40:03.795 [2024-06-11 14:07:56.436512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.795 [2024-06-11 14:07:56.436553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.795 qpair failed and we were unable to recover it. 00:40:03.795 [2024-06-11 14:07:56.436908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.795 [2024-06-11 14:07:56.436920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.795 qpair failed and we were unable to recover it. 00:40:03.795 [2024-06-11 14:07:56.437224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.795 [2024-06-11 14:07:56.437264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.795 qpair failed and we were unable to recover it. 00:40:03.795 [2024-06-11 14:07:56.437604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.437617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.437918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.437958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.438303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.438343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.438620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.438633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.438930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.438942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 
00:40:03.796 [2024-06-11 14:07:56.439245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.439257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.439545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.439557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.439842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.439882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.440237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.440277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.440610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.440651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.440983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.441022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.441321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.441361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.441635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.441676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.442001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.442013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.442298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.442310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 
00:40:03.796 [2024-06-11 14:07:56.442540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.442553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.442849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.442862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.443139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.443151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.443463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.443474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.443755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.443767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.444037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.444049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.444313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.444325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.444594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.444607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.444818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.444830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.445121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.445133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 
00:40:03.796 [2024-06-11 14:07:56.445408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.445420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.445692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.445705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.445999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.446011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.446292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.446304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.446593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.446606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.446896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.446942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.447214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.447254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.447592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.447633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.447988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.448027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.448390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.448432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 
00:40:03.796 [2024-06-11 14:07:56.448711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.448723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.449016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.449028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.449244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.796 [2024-06-11 14:07:56.449256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.796 qpair failed and we were unable to recover it. 00:40:03.796 [2024-06-11 14:07:56.449470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.449489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.449755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.449767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.449981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.449993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.450295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.450307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.450455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.450467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.450701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.450713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.450998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.451038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 
00:40:03.797 [2024-06-11 14:07:56.451369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.451408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.451762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.451794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.452092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.452133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.452493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.452535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.452889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.452929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.453143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.453183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.453465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.453539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.453833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.453856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.454108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.454147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.454514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.454555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 
00:40:03.797 [2024-06-11 14:07:56.454903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.454915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.455073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.455085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.455359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.455400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.455775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.455818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.456078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.456090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.456364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.456377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.456667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.456680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.456901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.456913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.457171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.457184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.457451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.457463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 
00:40:03.797 [2024-06-11 14:07:56.457675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.457688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.457953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.457965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.458176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.458188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.458396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.458408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.458696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.458709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.458946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.458960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.459280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.459319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.459521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.459562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.459863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.459903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.797 [2024-06-11 14:07:56.460258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.460298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 
00:40:03.797 [2024-06-11 14:07:56.460589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.797 [2024-06-11 14:07:56.460629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.797 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.460903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.460944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.461299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.461339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.461622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.461663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.462014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.462054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.462420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.462461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.462826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.462866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.463226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.463266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.463639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.463679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.464037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.464077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 
00:40:03.798 [2024-06-11 14:07:56.464377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.464416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.464771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.464784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.464991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.465003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.465290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.465336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.465699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.465740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.466094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.466106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.466239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.466252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.466467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.466482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.466701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.466713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.467030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.467042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 
00:40:03.798 [2024-06-11 14:07:56.467336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.467348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.467668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.467709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.468047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.468087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.468465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.468514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.468868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.468908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.469261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.469273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.469634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.469675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.469996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.470008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.470308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.470320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.470485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.470498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 
00:40:03.798 [2024-06-11 14:07:56.470770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.470782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.471073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.471085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.471377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.471389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.471669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.471711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.472066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.472106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.472391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.472437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.472795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.472807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.473068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.473081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.798 qpair failed and we were unable to recover it. 00:40:03.798 [2024-06-11 14:07:56.473364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.798 [2024-06-11 14:07:56.473376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.473682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.473723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 
00:40:03.799 [2024-06-11 14:07:56.474066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.474106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.474488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.474528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.474885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.474925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.475281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.475321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.475684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.475725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.476080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.476121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.476425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.476464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.476634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.476647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.476859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.476899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 00:40:03.799 [2024-06-11 14:07:56.477252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.799 [2024-06-11 14:07:56.477291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.799 qpair failed and we were unable to recover it. 
00:40:03.799 [2024-06-11 14:07:56.477577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.799 [2024-06-11 14:07:56.477618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:03.799 qpair failed and we were unable to recover it.
00:40:03.799 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats about 210 times between 14:07:56.477 and 14:07:56.542 (elapsed 00:40:03.799 through 00:40:03.805). Every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111. The tqpair pointer is 0x7ff240000b90 throughout, apart from one attempt against tqpair=0xdf6f80 (14:07:56.510) and two against tqpair=0x7ff248000b90 (14:07:56.511) ...]
00:40:03.805 [2024-06-11 14:07:56.542515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.542546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.542801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.542813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.543011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.543026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.543314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.543326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.543591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.543603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.543803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.543815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.544089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.544101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.544393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.544405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.544690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.544703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.545014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.545054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 
00:40:03.805 [2024-06-11 14:07:56.545392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.545432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.545847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.545924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.546300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.546343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.546628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.546669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.547001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.547041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.547394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.547434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.547720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.547760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.548118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.548161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.548500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.548541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.548831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.548843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 
00:40:03.805 [2024-06-11 14:07:56.549153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.549165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.549329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.549342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.549559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.549572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.549852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.549892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.550272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.550311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.550663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.550704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.551070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.551110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.551463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.551517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.551848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.551888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 00:40:03.805 [2024-06-11 14:07:56.552191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.552232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.805 qpair failed and we were unable to recover it. 
00:40:03.805 [2024-06-11 14:07:56.552560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.805 [2024-06-11 14:07:56.552602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.552883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.552923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.553277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.553316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.553543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.553584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.553843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.553855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.554128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.554140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.554437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.554450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.554666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.554679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.554898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.554910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.555118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.555130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 
00:40:03.806 [2024-06-11 14:07:56.555297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.555309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.555589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.555601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.555882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.555920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.556250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.556290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.556619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.556660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.556924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.556936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.557154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.557166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.557366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.557378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.557642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.557654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.557867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.557879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 
00:40:03.806 [2024-06-11 14:07:56.558175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.558187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.558473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.558489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.558795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.558818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.559114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.559154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.559414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.559454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.559760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.559801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.560159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.560199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.560533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.560575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.560849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.560889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.561230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.561270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 
00:40:03.806 [2024-06-11 14:07:56.561642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.561683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.562039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.562080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.562437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.562487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.562766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.562807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.563068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.563108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.563400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.563440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.563779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.563820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.564086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.564126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.806 qpair failed and we were unable to recover it. 00:40:03.806 [2024-06-11 14:07:56.564500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.806 [2024-06-11 14:07:56.564541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.564900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.564940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 
00:40:03.807 [2024-06-11 14:07:56.565279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.565336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.565698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.565740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.566095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.566134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.566521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.566562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.566783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.566823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.567198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.567238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.567593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.567635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.567960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.567972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.568195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.568207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.568423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.568435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 
00:40:03.807 [2024-06-11 14:07:56.568705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.568718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.568956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.568968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.569291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.569306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.569534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.569546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.569745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.569757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.570032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.570073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.570333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.570373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.570724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.570758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.571047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.571059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.571334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.571346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 
00:40:03.807 [2024-06-11 14:07:56.571634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.571647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.571864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.571876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.572020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.572032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.572253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.572292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.572622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.572664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.572924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.572965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.573316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.573357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.573578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.573619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.573951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.573987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.574199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.574211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 
00:40:03.807 [2024-06-11 14:07:56.574411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.574423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.574722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.574734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.574953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.807 [2024-06-11 14:07:56.574965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.807 qpair failed and we were unable to recover it. 00:40:03.807 [2024-06-11 14:07:56.575269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.575309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.575612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.575652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.575975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.575987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.576296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.576308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.576570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.576583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.576803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.576815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.577085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.577097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 
00:40:03.808 [2024-06-11 14:07:56.577389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.577401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.577627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.577640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.577929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.577941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.578242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.578254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.578462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.578475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.578761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.578773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.579010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.579022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.579229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.579241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.579530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.579571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.579912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.579953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 
00:40:03.808 [2024-06-11 14:07:56.580250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.580262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.580583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.580596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.580873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.580917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.581277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.581317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.581658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.581691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.581961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.581973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.582215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.582227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.582504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.582516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.582786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.582799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 00:40:03.808 [2024-06-11 14:07:56.582999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.808 [2024-06-11 14:07:56.583011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.808 qpair failed and we were unable to recover it. 
00:40:03.808 [2024-06-11 14:07:56.583317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.808 [2024-06-11 14:07:56.583329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:03.808 qpair failed and we were unable to recover it.
00:40:03.814 [2024-06-11 14:07:56.583529 .. 14:07:56.644769] (the connect()/qpair-failure pair above repeats for every reconnect attempt in this window, all against addr=10.0.0.2, port=4420; tqpair is 0x7ff240000b90 on most attempts, 0xdf6f80 on a handful, and 0x7ff248000b90 once)
00:40:03.814 [2024-06-11 14:07:56.645088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.645102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.645317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.645332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.645571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.645586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.645736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.645751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.645965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.645979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.646273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.646287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.646579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.646594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.646836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.646850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.647162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.647177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.647468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.647489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 
00:40:03.814 [2024-06-11 14:07:56.647789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.814 [2024-06-11 14:07:56.647804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.814 qpair failed and we were unable to recover it. 00:40:03.814 [2024-06-11 14:07:56.648097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.648112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.648328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.648343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.648618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.648632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.648957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.649006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.649341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.649399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.649764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.649812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.650123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.650138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.650434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.650449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.650750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.650765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 
00:40:03.815 [2024-06-11 14:07:56.650933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.650948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.651249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.651266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.651583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.651598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.651846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.651861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.652083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.652097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.652376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.652390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.652609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.652624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.652924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.652963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.653264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.653313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.653663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.653711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 
00:40:03.815 [2024-06-11 14:07:56.654016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.654065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.654418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.654433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.654674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.654689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.654935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.654950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.655163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.655178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.655448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.655462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.655711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.655725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.656017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.656032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.656247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.656261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.656463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.656483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 
00:40:03.815 [2024-06-11 14:07:56.656801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.656816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.657123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.657138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.657430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.657444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.657635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.657651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.657948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.657965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.658242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.658257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.658503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.658518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.658809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.658824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.815 [2024-06-11 14:07:56.659118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.815 [2024-06-11 14:07:56.659133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.815 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.659379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.659396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 
00:40:03.816 [2024-06-11 14:07:56.659616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.659630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.659942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.659957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.660173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.660187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.660482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.660497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.660663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.660678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.660945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.660959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.661282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.661296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.661588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.661603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.661823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.661837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.662129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.662144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 
00:40:03.816 [2024-06-11 14:07:56.662346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.662361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.662587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.662602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.662900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.662915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.663132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.663146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.663405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.663419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.663631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.663649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.663845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.663859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.664127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.664141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.664460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.664475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.664696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.664711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 
00:40:03.816 [2024-06-11 14:07:56.664939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.664954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.665285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.665299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.665553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.665568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.665863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.665877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.666112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.666127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.666409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.666423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.666708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.666723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.667015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.667030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.667263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.667277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.667497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.667512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 
00:40:03.816 [2024-06-11 14:07:56.667751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.667766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.667981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.667995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.668220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.668235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.668491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.668550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.668903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.668951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.816 [2024-06-11 14:07:56.669304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.816 [2024-06-11 14:07:56.669318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.816 qpair failed and we were unable to recover it. 00:40:03.817 [2024-06-11 14:07:56.669542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.817 [2024-06-11 14:07:56.669556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.817 qpair failed and we were unable to recover it. 00:40:03.817 [2024-06-11 14:07:56.669718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.817 [2024-06-11 14:07:56.669733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.817 qpair failed and we were unable to recover it. 00:40:03.817 [2024-06-11 14:07:56.670030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.817 [2024-06-11 14:07:56.670078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:03.817 qpair failed and we were unable to recover it. 00:40:03.817 [2024-06-11 14:07:56.670543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:03.817 [2024-06-11 14:07:56.670631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:03.817 qpair failed and we were unable to recover it. 
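Note: errno = 111 is ECONNREFUSED on Linux — the target at 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) is reachable but nothing is listening, so every qpair connect attempt fails immediately rather than timing out. A minimal standalone sketch (illustrative only, not SPDK code) that reproduces the same errno against an address with no listener:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);           /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the host up but no listener, this prints: errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }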
00:40:03.817 [2024-06-11 14:07:56.670875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.817 [2024-06-11 14:07:56.670919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420
00:40:03.817 qpair failed and we were unable to recover it.
[... two more attempts at 14:07:56.671254 and 14:07:56.671674 fail the same way on tqpair=0xdf6f80 ...]
00:40:03.817 [2024-06-11 14:07:56.672023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.817 [2024-06-11 14:07:56.672101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:03.817 qpair failed and we were unable to recover it.
[... four more attempts between 14:07:56.672411 and 14:07:56.673340 fail the same way on tqpair=0x7ff248000b90 ...]
00:40:03.817 [2024-06-11 14:07:56.673687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.817 [2024-06-11 14:07:56.673704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:03.817 qpair failed and we were unable to recover it.
[... one more attempt at 14:07:56.673939 fails the same way on tqpair=0x7ff240000b90 ...]
00:40:03.817 [2024-06-11 14:07:56.674176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:03.817 [2024-06-11 14:07:56.674191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:03.817 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every attempt from 14:07:56.674488 through 14:07:56.679027 on tqpair=0x7ff240000b90, then from 14:07:56.679375 through 14:07:56.680011 on tqpair=0x7ff248000b90, then from 14:07:56.680339 through 14:07:56.694208 on tqpair=0x7ff240000b90; the console timestamp advances from 00:40:03.818 to 00:40:04.094 during this run ...]
00:40:04.094 [2024-06-11 14:07:56.694402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.094 [2024-06-11 14:07:56.694432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.094 qpair failed and we were unable to recover it.
00:40:04.094 [2024-06-11 14:07:56.694674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.094 [2024-06-11 14:07:56.694717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.094 qpair failed and we were unable to recover it. 00:40:04.094 [2024-06-11 14:07:56.695000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.094 [2024-06-11 14:07:56.695041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.695265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.695305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.695585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.695605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.695837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.695856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.696139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.696158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.696325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.696341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.696595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.696644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.696930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.696977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.697188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.697236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 
00:40:04.095 [2024-06-11 14:07:56.697522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.697570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.697855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.697902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.698175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.698223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.698568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.698610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.698957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.698998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.699211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.699257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.699470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.699489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.699765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.699777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.700131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.700171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.700448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.700501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 
00:40:04.095 [2024-06-11 14:07:56.700780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.700821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.701159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.701199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.701529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.701570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.701944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.701984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.702279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.702314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.702599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.702611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.702878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.702891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.703181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.703193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.703498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.703540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.703821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.703861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 
00:40:04.095 [2024-06-11 14:07:56.704144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.704184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.704406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.704445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.704763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.704804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.705085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.705124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.705422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.705461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.705756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.705797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.706147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.706187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.706513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.095 [2024-06-11 14:07:56.706554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.095 qpair failed and we were unable to recover it. 00:40:04.095 [2024-06-11 14:07:56.706885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.706925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.707243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.707283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 
00:40:04.096 [2024-06-11 14:07:56.707570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.707611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.707813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.707853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.708202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.708241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.708537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.708578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.708857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.708897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.709128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.709168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.709527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.709568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.709871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.709911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.710193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.710233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.710609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.710651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 
00:40:04.096 [2024-06-11 14:07:56.710870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.710910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.711262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.711302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.711513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.711554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.711773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.711814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.712142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.712182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.712469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.712537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.712889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.712930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.713296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.713336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.713678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.713691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.713915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.713927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 
00:40:04.096 [2024-06-11 14:07:56.714166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.714178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.714404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.714416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.714575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.714587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.714884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.714896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.715161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.715173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.715503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.715516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.715730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.715744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.715894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.715906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.716173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.716185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.716401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.716413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 
00:40:04.096 [2024-06-11 14:07:56.716570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.716583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.716875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.716887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.717107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.717147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.717348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.717387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.717744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.717785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.096 [2024-06-11 14:07:56.718065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.096 [2024-06-11 14:07:56.718105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.096 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.718435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.718474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.718847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.718888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.719218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.719257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.719638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.719679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 
00:40:04.097 [2024-06-11 14:07:56.720057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.720069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.720371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.720383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.720596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.720609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.720828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.720840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.721039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.721051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.721359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.721387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.721687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.721727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.722074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.722114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.722387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.722427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.722697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.722709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 
00:40:04.097 [2024-06-11 14:07:56.722974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.722987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.723224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.723236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.723450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.723463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.723757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.723770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.724041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.724053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.724261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.724273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.724485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.724498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.724786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.724798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.725083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.725095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.725315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.725327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 
00:40:04.097 [2024-06-11 14:07:56.725479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.725492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.725809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.725849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.726154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.726193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.726421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.726434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.726583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.726596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.726887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.726899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.726993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.727006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.727215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.727227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.727448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.727461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.727681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.727694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 
00:40:04.097 [2024-06-11 14:07:56.727971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.727984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.728148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.728161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.728366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.728406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.097 [2024-06-11 14:07:56.728694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.097 [2024-06-11 14:07:56.728735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.097 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.729084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.729124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.729379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.729391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.729604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.729616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.729890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.729902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.730169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.730181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.730381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.730393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 
00:40:04.098 [2024-06-11 14:07:56.730716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.730729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.730896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.730908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.731199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.731211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.731418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.731430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.731647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.731659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.731880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.731892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.732099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.732111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.732395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.732407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.732723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.732735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 00:40:04.098 [2024-06-11 14:07:56.732955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.732967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it. 
00:40:04.098 [2024-06-11 14:07:56.733259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.098 [2024-06-11 14:07:56.733270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.098 qpair failed and we were unable to recover it.
00:40:04.104 [... the same three-line failure sequence (posix.c:1037:posix_sock_create: connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats verbatim for each successive reconnect attempt from 14:07:56.733259 through 14:07:56.789674 ...]
00:40:04.104 [2024-06-11 14:07:56.789879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.789918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.790248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.790288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.790592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.790632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.790889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.790929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.791281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.791321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.791596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.791637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.791992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.792037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.792245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.792257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.792385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.792397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.792559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.792571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 
00:40:04.104 [2024-06-11 14:07:56.792859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.792871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.793099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.793111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.793321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.793333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.793641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.793653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.793780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.793792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.794002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.794014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.794246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.794258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.794493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.794505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.794793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.794805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.795055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.795095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 
00:40:04.104 [2024-06-11 14:07:56.795430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.795470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.795823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.795863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.796076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.796116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.796334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.104 [2024-06-11 14:07:56.796373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.104 qpair failed and we were unable to recover it. 00:40:04.104 [2024-06-11 14:07:56.796606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.796647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.796947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.796986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.797197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.797237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.797506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.797546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.797695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.797708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.797996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.798036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 
00:40:04.105 [2024-06-11 14:07:56.798308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.798348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.798684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.798697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.798917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.798929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.799142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.799154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.799444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.799456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.799592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.799604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.799830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.799842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.799998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.800009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.800172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.800184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.800335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.800347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 
00:40:04.105 [2024-06-11 14:07:56.800616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.800629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.800861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.800873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.801089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.801101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.801379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.801391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.801609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.801622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.801825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.801837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.802067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.802080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.802367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.802379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.802488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.802499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.802726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.802738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 
00:40:04.105 [2024-06-11 14:07:56.802959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.802972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.803099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.803111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.803405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.803444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.803736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.803776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.804104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.804144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.804441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.804492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.804847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.804887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.805247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.805287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.805643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.805685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.806007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.806019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 
00:40:04.105 [2024-06-11 14:07:56.806239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.806279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.105 [2024-06-11 14:07:56.806545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.105 [2024-06-11 14:07:56.806557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.105 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.806843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.806855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.807168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.807180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.807470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.807486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.807780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.807791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.808079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.808091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.808308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.808320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.808537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.808550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.808760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.808772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 
00:40:04.106 [2024-06-11 14:07:56.809061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.809072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.809311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.809322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.809544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.809556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.809801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.809813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.810077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.810090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.810252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.810264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.810470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.810486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.810699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.810712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.810976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.811015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.811372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.811411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 
00:40:04.106 [2024-06-11 14:07:56.811716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.811756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.812067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.812107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.812303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.812342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.812607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.812648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.812920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.812933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.813146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.813158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.813407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.813421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.813616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.813629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.813846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.813858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.814033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.814045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 
00:40:04.106 [2024-06-11 14:07:56.814243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.814254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.814454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.814466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.814704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.814746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.815043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.815083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.815357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.815397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.815703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.815744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.816021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.816062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.816390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.816430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.816817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.816857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.106 qpair failed and we were unable to recover it. 00:40:04.106 [2024-06-11 14:07:56.817133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.106 [2024-06-11 14:07:56.817174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 
00:40:04.107 [2024-06-11 14:07:56.817488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.817529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.817802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.817842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.818912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.818937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.819284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.819313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.819673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.819716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.819939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.819980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.820195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.820236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.820559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.820571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.820839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.820852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.821015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.821027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 
00:40:04.107 [2024-06-11 14:07:56.821351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.821365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.821629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.821641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.821884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.821925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.822289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.822329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.822594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.822606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.822803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.822815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.823050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.823062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.823337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.823350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.823617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.823630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.823923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.823936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 
00:40:04.107 [2024-06-11 14:07:56.824241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.824254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.824522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.824535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.824834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.824874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.825238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.825278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.825608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.825649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.825874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.825886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.826163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.826176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.826320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.826332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.826568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.826608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.826817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.826856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 
00:40:04.107 [2024-06-11 14:07:56.827188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.827228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.827447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.827520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.827733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.828045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.828086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.828391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.828431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.828710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.828751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.829018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.107 [2024-06-11 14:07:56.829059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.107 qpair failed and we were unable to recover it. 00:40:04.107 [2024-06-11 14:07:56.829285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.829325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.829655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.829696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.829984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.830024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 
00:40:04.108 [2024-06-11 14:07:56.830302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.830343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.830650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.830663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.830866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.830888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.831038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.831050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.831279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.831319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.831622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.831663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.831883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.831896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.832159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.832171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.832463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.832480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.832630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.832643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 
00:40:04.108 [2024-06-11 14:07:56.832795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.832807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.833076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.833089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.833313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.833326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.833538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.833551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.833765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.833777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.833972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.833985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.834210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.834223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.834381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.834394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.834597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.834616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.834893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.834905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 
00:40:04.108 [2024-06-11 14:07:56.835201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.835214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.835427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.835440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.835661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.835674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.835894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.835906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.836131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.836144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.836407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.836420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.836629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.836643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.836859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.836871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.108 [2024-06-11 14:07:56.837169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.108 [2024-06-11 14:07:56.837209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.108 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.837575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.837623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 
00:40:04.109 [2024-06-11 14:07:56.837835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.837847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.838112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.838124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.838337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.838349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.838553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.838566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.838781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.838820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.839146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.839186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.839474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.839495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.839712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.839725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.839976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.839993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.840223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.840238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 
00:40:04.109 [2024-06-11 14:07:56.840442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.840459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.840629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.840642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.840874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.840886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.841095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.841110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.841311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.841325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.841543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.841557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.841774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.841823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.842098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.842154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.842439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.842499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.842853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.842901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 
00:40:04.109 [2024-06-11 14:07:56.843184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.843234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.843531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.843580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.843841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.843856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.844155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.844171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.844395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.844410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.844655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.844670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.844886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.844901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.845187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.845202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.845459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.845474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.845693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.845708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 
00:40:04.109 [2024-06-11 14:07:56.845876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.845891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.846117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.846132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.846400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.846415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.846683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.846698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.846982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.846996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.847199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.109 [2024-06-11 14:07:56.847217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.109 qpair failed and we were unable to recover it. 00:40:04.109 [2024-06-11 14:07:56.847516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.847533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.847765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.847779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.848048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.848063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.848309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.848323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 
00:40:04.110 [2024-06-11 14:07:56.848542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.848557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.848757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.848772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.849098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.849113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.849271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.849286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.849401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.849415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.849618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.849633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.849833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.849848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.850049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.850063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.850326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.850340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.850493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.850508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 
00:40:04.110 [2024-06-11 14:07:56.850738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.850787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.851076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.851127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.851415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.851463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.851776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.851825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.852148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.852163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.852481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.852496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.852768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.852782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.853055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.853070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.853377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.853392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.853507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.853521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 
00:40:04.110 [2024-06-11 14:07:56.853791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.853805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.853976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.853991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.854192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.854207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.854365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.854381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.854619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.854635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.854856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.854871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.855079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.855094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.855389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.855403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.855523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.855537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.855825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.855840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 
00:40:04.110 [2024-06-11 14:07:56.856133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.856150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.856370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.856385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.856624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.856642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.856788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.110 [2024-06-11 14:07:56.856803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.110 qpair failed and we were unable to recover it. 00:40:04.110 [2024-06-11 14:07:56.856955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.856969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.857237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.857252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.857398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.857413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.857634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.857649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.857850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.857865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.858196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.858211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 
00:40:04.111 [2024-06-11 14:07:56.858446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.858461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.858684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.858700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.858864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.858879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.859146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.859160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.859258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.859272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.859540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.859555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.859708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.859722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.859990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.860004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.860158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.860173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.860390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.860439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 
00:40:04.111 [2024-06-11 14:07:56.860866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.860916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.861101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.861116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.861275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.861290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.861515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.861530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.861822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.861837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.861996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.862010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.862254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.862269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.862576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.862590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.862791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.862806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.863054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.863068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 
00:40:04.111 [2024-06-11 14:07:56.863289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.863304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.863471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.863492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.863805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.863820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.864038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.864058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.864284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.864299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.864449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.864463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.864711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.864727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.864941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.864956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.865191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.865206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.865501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.865517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 
00:40:04.111 [2024-06-11 14:07:56.865744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.865759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.866070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.111 [2024-06-11 14:07:56.866085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.111 qpair failed and we were unable to recover it. 00:40:04.111 [2024-06-11 14:07:56.866327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.866342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.866508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.866524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.866797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.866811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.867041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.867056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.867278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.867292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.867445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.867459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.867770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.867821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.868145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.868194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 
00:40:04.112 [2024-06-11 14:07:56.868552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.868567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.868796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.868853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.869199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.869247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.869486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.869502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.869773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.869788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.870014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.870028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.870246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.870260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.870538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.870553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.870696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.870710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.870856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.870872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 
00:40:04.112 [2024-06-11 14:07:56.871076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.871090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.871379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.871394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.871615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.871630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.871785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.871800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.871941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.871956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.872172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.872204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.872489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.872537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.872997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.873074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.873274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.873317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.873595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.873638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 
00:40:04.112 [2024-06-11 14:07:56.873918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.873959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.874132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.874148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.874316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.874330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.874435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.874450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.874670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.874685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.874954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.874969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.875236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.875251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.875472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.875493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.875696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.112 [2024-06-11 14:07:56.875711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.112 qpair failed and we were unable to recover it. 00:40:04.112 [2024-06-11 14:07:56.876007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.113 [2024-06-11 14:07:56.876022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.113 qpair failed and we were unable to recover it. 
00:40:04.113 [2024-06-11 14:07:56.876170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.876184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.876398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.876412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.876561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.876575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.876741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.876755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.876967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.876982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.877140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.877154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.877313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.877327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.877460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.877475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.877711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.877725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.878015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.878029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.878172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.878186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.878332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.878349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.878593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.878608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.878781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.878796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.879063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.879077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.879277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.879292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.879577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.879625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.879822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.879871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.880157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.880210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.880499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.880542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.880907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.880946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.881176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.881215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.881566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.881607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.881787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.881803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.882108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.882122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.882323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.882338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.882514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.882529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.882752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.882800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.883155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.883203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.883471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.883536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.883695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.883710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.883931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.883946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.884165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.113 [2024-06-11 14:07:56.884179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.113 qpair failed and we were unable to recover it.
00:40:04.113 [2024-06-11 14:07:56.884309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.884326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.884594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.884610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.884826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.884840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.885138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.885153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.885308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.885321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.885535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.885549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.885761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.885776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.885986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.886000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.886154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.886169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.886309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.886323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.886551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.886567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.886813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.886828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.887106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.887121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.887235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.887250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.887409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.887424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.887566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.887583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.887800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.887815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.888082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.888097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.888236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.888250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.888458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.888544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.888827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.888875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.889167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.889215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.889516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.889566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.889839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.889854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.890120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.890135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.890338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.890352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.890642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.890657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.890824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.890839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.890995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.891012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.891206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.891220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.891500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.891549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.891935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.892003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.892295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.892311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.892581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.892596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.892807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.892821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.893025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.893041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.893242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.893256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.893429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.114 [2024-06-11 14:07:56.893520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.114 qpair failed and we were unable to recover it.
00:40:04.114 [2024-06-11 14:07:56.893807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.893856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.894092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.894143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.894431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.894513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.894975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.895053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.895375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.895419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.895797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.895839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.896011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.896032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.896217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.896237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.896508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.896529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.896758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.896778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.897005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.897024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.897246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.897262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.897469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.897495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.897715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.897730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.897938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.897953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.898154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.898168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.898415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.898430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.898723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.898738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.898882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.898896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.899187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.899201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.899440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.899455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.899752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.899767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.899996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.900011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.900320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.900335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.900548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.900563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.900779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.900794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.900999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.901013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.901168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.901182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.901326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.901340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.901497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.901513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.901805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.901820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.902043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.902058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.902257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.902272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.115 qpair failed and we were unable to recover it.
00:40:04.115 [2024-06-11 14:07:56.902424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.115 [2024-06-11 14:07:56.902438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.902585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.902599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.902891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.902906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.903247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.903296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.903562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.903579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.903817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.903831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.904032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.904047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.904324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.904340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.904624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.904639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.904909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.904926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.905127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.905141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.905409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.905424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.905634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.905648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.905873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.905888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.906157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.906171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.906290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.906307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.906528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.906542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.906765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.906780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.906982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.906997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.907212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.907230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.907460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.907474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.907749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.907764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.907926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.907940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.908088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.908103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.908315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.908330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.908420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.908434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.908664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.908680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.908956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.908971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.909137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.909152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.909364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.909378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.909584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.909599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.909817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.909831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.910051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.910066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.910268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.910282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.910505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.910520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.910737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.910751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.910970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.910986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.116 qpair failed and we were unable to recover it.
00:40:04.116 [2024-06-11 14:07:56.911230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.116 [2024-06-11 14:07:56.911244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.911446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.911460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.911672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.911687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.911904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.911919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.912187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.912201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.912421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.912435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.912706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.912723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.913032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.913047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.913346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.913361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.913524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.913539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.913847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.913861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.914001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.914016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.914246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.914263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.914508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.914523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.914858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.914873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.915034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.915051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.915206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.915221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.915366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.915380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.915551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.915566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.915859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.915895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.916179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.916228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.916599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.916631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.916943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.916961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.917256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.917270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.917560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.917575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.917685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.917699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.917995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.918009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.918215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.918229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.117 [2024-06-11 14:07:56.918499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.117 [2024-06-11 14:07:56.918514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.117 qpair failed and we were unable to recover it.
00:40:04.118 [2024-06-11 14:07:56.918724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.918742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.918953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.919002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.919359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.919407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.919716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.919765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.920071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.920118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.920490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.920539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.920822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.920837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.920940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.920954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.921112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.921127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 00:40:04.118 [2024-06-11 14:07:56.921290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.118 [2024-06-11 14:07:56.921325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.118 qpair failed and we were unable to recover it. 
00:40:04.118 [2024-06-11 14:07:56.921569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.118 [2024-06-11 14:07:56.921619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420
00:40:04.118 qpair failed and we were unable to recover it.
00:40:04.118 [... the same sequence repeats 7 more times for tqpair=0xdf6f80 between 14:07:56.921891 and 14:07:56.923912, then twice for tqpair=0x7ff240000b90 at 14:07:56.924255 and 14:07:56.924505 ...]
00:40:04.118 [2024-06-11 14:07:56.924734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.118 [2024-06-11 14:07:56.924747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.118 qpair failed and we were unable to recover it.
00:40:04.118 [... the identical sequence repeats 169 more times for tqpair=0x7ff240000b90 between 14:07:56.924861 and 14:07:56.971788 (elapsed 00:40:04.118 through 00:40:04.123) ...]
00:40:04.123 [2024-06-11 14:07:56.972075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.972088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.972306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.972319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.972647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.972660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.972885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.972897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.973209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.973221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.973514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.973527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.973752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.973764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.974081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.974093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.974307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.974320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.974585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.974598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 
00:40:04.123 [2024-06-11 14:07:56.974897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.974909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.975119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.975132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.975371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.975384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.975583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.975596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.975910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.975922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.976140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.976152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.976368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.976380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.976562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.976575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.976775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.976788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.977006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.977018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 
00:40:04.123 [2024-06-11 14:07:56.977230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.123 [2024-06-11 14:07:56.977245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.123 qpair failed and we were unable to recover it. 00:40:04.123 [2024-06-11 14:07:56.977450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.977462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.977684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.977696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.977899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.977912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.978188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.978228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.978578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.978620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.978989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.979029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.979331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.979371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.979651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.979981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.980020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 
00:40:04.124 [2024-06-11 14:07:56.980293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.980334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.980617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.980657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.980931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.980943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.981120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.981133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.981374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.981414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.981778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.981820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.982129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.982141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.982276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.982288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.982588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.982601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.982784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.982797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 
00:40:04.124 [2024-06-11 14:07:56.982944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.982957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.983177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.983190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.983434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.983474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.983743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.983782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.984080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.984120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.984473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.984522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.984887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.984927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.985192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.985205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.985445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.985457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.985640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.985653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 
00:40:04.124 [2024-06-11 14:07:56.985815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.985827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.986037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.986049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.986263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.986275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.986548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.986561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.986860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.986872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.987025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.987037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.124 [2024-06-11 14:07:56.987168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.124 [2024-06-11 14:07:56.987180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.124 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.987338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.987350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.987485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.987499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.987714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.987726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 
00:40:04.402 [2024-06-11 14:07:56.987935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.987949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.988149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.988162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.988388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.988400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.988532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.988544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.988706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.988718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.988942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.988954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.989254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.989266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.989410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.989423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.989645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.989657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.989922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.989935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 
00:40:04.402 [2024-06-11 14:07:56.990148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.990160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.990305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.990317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.990537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.990549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.990749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.990761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.991052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.991065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.991350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.991362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.991507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.991520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.991786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.991798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.991933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.991944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.992191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.992204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 
00:40:04.402 [2024-06-11 14:07:56.992411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.992424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.992637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.992650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.992920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.992932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.993154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.993195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.993453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.993504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.993767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.993806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.994067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.994106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.994328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.994369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.994648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.994689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.994901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.994941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 
00:40:04.402 [2024-06-11 14:07:56.995188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.995200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.995484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.995497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.995701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.995714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.402 [2024-06-11 14:07:56.995856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.402 [2024-06-11 14:07:56.995868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.402 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.996016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.996028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.996245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.996257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.996428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.996468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.996751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.996791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.996989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.997029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.997354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.997394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 
00:40:04.403 [2024-06-11 14:07:56.997656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.997703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.997980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.998020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.998213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.998225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.998429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.998468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.998807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.998847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.999048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.999061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.999336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.999376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.999570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.999611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:56.999939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:56.999979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.000173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.000213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 
00:40:04.403 [2024-06-11 14:07:57.000541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.000582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.000840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.000879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.001146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.001158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.001425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.001436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.001551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.001563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.001722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.001734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.002025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.002038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.002302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.002315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.002531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.002544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.002742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.002754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 
00:40:04.403 [2024-06-11 14:07:57.002969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.002981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.003214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.003254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.003466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.003518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.003802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.003841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.004164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.004203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.004487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.004529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.004879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.004918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.005178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.005191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.005455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.005467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 00:40:04.403 [2024-06-11 14:07:57.005679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.403 [2024-06-11 14:07:57.005692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.403 qpair failed and we were unable to recover it. 
00:40:04.403 [2024-06-11 14:07:57.005830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.404 [2024-06-11 14:07:57.005842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.404 qpair failed and we were unable to recover it.
00:40:04.404 [... the same three-line connect()/qpair-failure sequence (errno = 111, tqpair=0x7ff240000b90, addr=10.0.0.2, port=4420) repeats verbatim for each reconnect attempt from 14:07:57.006072 through 14:07:57.062530 ...]
00:40:04.409 [2024-06-11 14:07:57.062752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.409 [2024-06-11 14:07:57.062766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.409 qpair failed and we were unable to recover it.
00:40:04.409 [2024-06-11 14:07:57.062921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.062934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.063216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.063232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.063485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.063498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.063696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.063709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.063909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.063921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.064197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.064209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.064361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.064378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.064595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.064608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.064707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.064719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.065033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.065046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 
00:40:04.409 [2024-06-11 14:07:57.065205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.065217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.409 qpair failed and we were unable to recover it. 00:40:04.409 [2024-06-11 14:07:57.065453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.409 [2024-06-11 14:07:57.065467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.065620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.065637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.065843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.065857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.066053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.066067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.066336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.066351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.066647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.066663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.066810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.066824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.067027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.067041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.067338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.067353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 
00:40:04.410 [2024-06-11 14:07:57.067556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.067571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.067726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.067741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.067960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.067974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.068112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.068126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.068325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.068340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.068612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.068625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.068778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.068793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.068947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.068963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.069183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.069198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.069483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.069498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 
00:40:04.410 [2024-06-11 14:07:57.069728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.069742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.069892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.069906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.070179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.070193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.070489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.070504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.070657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.070671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.070931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.070946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.071119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.071133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.071404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.071418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.071634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.071651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.071944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.071964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 
00:40:04.410 [2024-06-11 14:07:57.072214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.072237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.072469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.072496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.072730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.072749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.072914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.072929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.073063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.073077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.073276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.073291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.073452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.073467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.073704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.073718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.073840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.073854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.410 qpair failed and we were unable to recover it. 00:40:04.410 [2024-06-11 14:07:57.074070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.410 [2024-06-11 14:07:57.074085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 
00:40:04.411 [2024-06-11 14:07:57.074297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.074311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.074523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.074538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.074751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.074765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.074874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.074888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.075119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.075134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.075338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.075352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.075519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.075532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.075693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.075705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.075856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.075869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.076133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.076145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 
00:40:04.411 [2024-06-11 14:07:57.076290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.076302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.076518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.076530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.076692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.076704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.076915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.076928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.077138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.077151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.077358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.077370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.077571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.077583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.077728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.077744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.077878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.077891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.078160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.078189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 
00:40:04.411 [2024-06-11 14:07:57.078457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.078516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.078871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.078911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.079231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.079243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.079443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.079455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.079741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.079753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.079920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.079959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.080179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.080219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.080548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.080589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.080947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.080987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.081249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.081261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 
00:40:04.411 [2024-06-11 14:07:57.081460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.411 [2024-06-11 14:07:57.081472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.411 qpair failed and we were unable to recover it. 00:40:04.411 [2024-06-11 14:07:57.081606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.081618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.081769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.081782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.082053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.082065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.082368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.082408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.082643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.082685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.082964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.083004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.083214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.083254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.084241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.084265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.084442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.084455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 
00:40:04.412 [2024-06-11 14:07:57.084697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.084739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.084959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.084999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.085350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.085390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.085684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.085725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.086074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.086115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.086386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.086398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.086692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.086704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.087004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.087016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.087227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.087239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.087393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.087405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 
00:40:04.412 [2024-06-11 14:07:57.087545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.087557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.087822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.087834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.088083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.088096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.088253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.088265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.088400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.088411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.088677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.088689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.088908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.088921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.089119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.089133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.089344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.089383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.089700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.089741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 
00:40:04.412 [2024-06-11 14:07:57.089956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.089997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.090226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.090266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.090536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.090549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.090695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.090707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.091002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.091014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.091256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.091296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.091509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.091551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.091776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.091816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.092142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.412 [2024-06-11 14:07:57.092154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.412 qpair failed and we were unable to recover it. 00:40:04.412 [2024-06-11 14:07:57.092364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.092376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 
00:40:04.413 [2024-06-11 14:07:57.092518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.092530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.092763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.092775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.092951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.092963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.093232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.093244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.093382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.093394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.093550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.093562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.093759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.093772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.094009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.094021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.094165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.094176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 00:40:04.413 [2024-06-11 14:07:57.094441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.413 [2024-06-11 14:07:57.094453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.413 qpair failed and we were unable to recover it. 
00:40:04.413 [2024-06-11 14:07:57.094603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.413 [2024-06-11 14:07:57.094615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.413 qpair failed and we were unable to recover it.
00:40:04.413 [... the same three-message failure (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:07:57.094 through 14:07:57.142, almost entirely against tqpair=0x7ff240000b90; around 14:07:57.133 three consecutive failures report tqpair=0x7ff238000b90 before the run returns to tqpair=0x7ff240000b90 ...]
00:40:04.418 [2024-06-11 14:07:57.143595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.418 [2024-06-11 14:07:57.143620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.418 qpair failed and we were unable to recover it. 00:40:04.418 [2024-06-11 14:07:57.143836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.418 [2024-06-11 14:07:57.143849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.418 qpair failed and we were unable to recover it. 00:40:04.418 [2024-06-11 14:07:57.144060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.418 [2024-06-11 14:07:57.144072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.418 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.144275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.144288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.144495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.144507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.144708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.144721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.144938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.144979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.145194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.145242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.145566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.145578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.145799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.145811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 
00:40:04.419 [2024-06-11 14:07:57.146131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.146144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.146263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.146275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.146494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.146507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.146757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.146770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.146916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.146928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.147142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.147154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.147358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.147370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.147601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.147902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.147915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.148114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.148126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 
00:40:04.419 [2024-06-11 14:07:57.148246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.148258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.148568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.148609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.148837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.148878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.149145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.149185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.149445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.149496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.149851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.149891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.150066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.150107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.150384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.150425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.150748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.150761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.150979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.151019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 
00:40:04.419 [2024-06-11 14:07:57.151280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.151320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.151540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.151553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.151760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.151772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.152040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.152052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.152202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.152215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.152417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.152429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.152646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.152658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.152950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.152962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.153093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.153106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.153252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.153265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 
00:40:04.419 [2024-06-11 14:07:57.153474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.419 [2024-06-11 14:07:57.153491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.419 qpair failed and we were unable to recover it. 00:40:04.419 [2024-06-11 14:07:57.153768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.153780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.153919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.153931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.154084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.154096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.154250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.154263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.154528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.154541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.154769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.154782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.154933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.154947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.155092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.155104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.155264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.155277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 
00:40:04.420 [2024-06-11 14:07:57.155481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.155494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.155760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.155772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.155909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.155922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.156134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.156145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.156302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.156314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.156474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.156490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.156773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.156813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.157087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.157127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.157415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.157455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.157682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.157724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 
00:40:04.420 [2024-06-11 14:07:57.158008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.158047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.158307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.158319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.158517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.158530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.158731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.158743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.158886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.158899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.159041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.159052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.159253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.159265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.159416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.159429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.159723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.159763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.159963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.160003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 
00:40:04.420 [2024-06-11 14:07:57.160290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.160335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.160492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.160505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.420 [2024-06-11 14:07:57.160717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.420 [2024-06-11 14:07:57.160758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.420 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.161018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.161058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.161372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.161446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.161685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.161729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.162014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.162054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.162358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.162398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.162655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.162696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.163029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.163069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 
00:40:04.421 [2024-06-11 14:07:57.163331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.163370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.163636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.163676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.163933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.163947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.164090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.164102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.164389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.164401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.164599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.164612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.164760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.164772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.164914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.164926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.165139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.165152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.165311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.165323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 
00:40:04.421 [2024-06-11 14:07:57.165456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.165468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.165630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.165642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.165864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.165904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.166165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.166206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.166366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.166393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.166617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.166629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.166858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.166871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.167091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.167103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.167255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.167267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.167428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.167463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 
00:40:04.421 [2024-06-11 14:07:57.167680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.167721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.167948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.167988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.168251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.168291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.168493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.168521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.168732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.421 [2024-06-11 14:07:57.168745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.421 qpair failed and we were unable to recover it. 00:40:04.421 [2024-06-11 14:07:57.168885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.168897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.169107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.169119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.169287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.169300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.169507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.169520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.169722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.169734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 
00:40:04.422 [2024-06-11 14:07:57.169902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.169914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.170068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.170081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.170281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.170294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.170512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.170525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.170725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.170739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.170882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.170895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.171038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.171050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.171263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.171275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.171564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.171576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.171776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.171788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 
00:40:04.422 [2024-06-11 14:07:57.171941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.171953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.172106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.172118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.172317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.172330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.172613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.172627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.172826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.172838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.172996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.173009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.173230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.173270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.173536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.173577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.173784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.173825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 00:40:04.422 [2024-06-11 14:07:57.174080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.422 [2024-06-11 14:07:57.174120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.422 qpair failed and we were unable to recover it. 
00:40:04.422 [2024-06-11 14:07:57.174389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.422 [2024-06-11 14:07:57.174401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.422 qpair failed and we were unable to recover it.
00:40:04.422 [... the same three-message failure group (connect() failed, errno = 111 / sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats verbatim across timestamps 14:07:57.174 through 14:07:57.223; every reconnect attempt in this window fails the same way ...]
00:40:04.428 [2024-06-11 14:07:57.223290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.428 [2024-06-11 14:07:57.223302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.428 qpair failed and we were unable to recover it.
00:40:04.428 [2024-06-11 14:07:57.223551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.223592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.223800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.223841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.224218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.224258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.224633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.224646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.224802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.224814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.225026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.225038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.225178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.225190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.225418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.225467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.225778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.225818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.226026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.226066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 
00:40:04.428 [2024-06-11 14:07:57.226353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.226393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.226652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.226665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.226978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.226990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.227131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.227143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.227363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.227409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.227697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.227738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.228093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.228133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.228431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.228472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.228698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.228711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.228928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.228968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 
00:40:04.428 [2024-06-11 14:07:57.229297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.229337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.229636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.229649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.229864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.229876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.230036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.230049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.230189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.230201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.230415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.428 [2024-06-11 14:07:57.230428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.428 qpair failed and we were unable to recover it. 00:40:04.428 [2024-06-11 14:07:57.230661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.230674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.230892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.230904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.231117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.231130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.231333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.231345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 
00:40:04.429 [2024-06-11 14:07:57.231585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.231598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.231751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.231764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.232035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.232074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.232284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.232324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.232549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.232562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.232763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.232776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.233063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.233076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.233223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.233235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.233389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.233401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.233621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.233634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 
00:40:04.429 [2024-06-11 14:07:57.233849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.233890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.234105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.234145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.234429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.234468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.234745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.234786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.235006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.235018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.235237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.235249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.235548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.235561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.235850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.235863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.236060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.236072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.236236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.236248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 
00:40:04.429 [2024-06-11 14:07:57.236395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.236407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.236614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.236627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.236825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.236837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.237061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.237073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.237270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.237285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.237504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.237533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.237667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.237679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.237891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.237903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.238139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.238151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.238291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.238302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 
00:40:04.429 [2024-06-11 14:07:57.238503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.238516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.238676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.238688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.238831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.238843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.239048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.239098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.429 qpair failed and we were unable to recover it. 00:40:04.429 [2024-06-11 14:07:57.239365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.429 [2024-06-11 14:07:57.239405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.239617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.239659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.239913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.239924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.240141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.240153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.240306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.240319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.240517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.240530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 
00:40:04.430 [2024-06-11 14:07:57.240748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.240760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.240869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.240884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.241101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.241114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.241370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.241382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.241516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.241528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.241728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.241740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.241962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.242002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.242259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.242300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.242526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.242567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.242782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.242823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 
00:40:04.430 [2024-06-11 14:07:57.243169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.243208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.243498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.243539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.243792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.243804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.244052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.244064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.244332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.244345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.244559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.244572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.244824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.244864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.245086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.245126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.245319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.245358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.245664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.245677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 
00:40:04.430 [2024-06-11 14:07:57.245813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.245825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.246112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.246152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.246362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.246402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.246690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.246731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.246996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.247042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.247246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.430 [2024-06-11 14:07:57.247286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.430 qpair failed and we were unable to recover it. 00:40:04.430 [2024-06-11 14:07:57.247595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.247608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.247807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.247819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.248096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.248108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.248289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.248329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 
00:40:04.431 [2024-06-11 14:07:57.248656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.248697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.248916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.248928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.249079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.249091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.249230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.249242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.249407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.249420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.249571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.249584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.249758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.249770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.249928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.249975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.250311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.250351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.250754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.250795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 
00:40:04.431 [2024-06-11 14:07:57.251063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.251103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.251388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.251428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.251681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.251694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.251965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.251988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.252257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.252297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.252556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.252569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.252716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.252728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.252946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.252986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.253202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.253242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.253438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.253511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 
00:40:04.431 [2024-06-11 14:07:57.253803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.253815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.254031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.254044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.254239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.254251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.254459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.254471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.254761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.254802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.255068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.255108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.255330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.255369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.255640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.255682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.255975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.255987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 00:40:04.431 [2024-06-11 14:07:57.256143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.431 [2024-06-11 14:07:57.256155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.431 qpair failed and we were unable to recover it. 
00:40:04.431 [2024-06-11 14:07:57.256420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.431 [2024-06-11 14:07:57.256432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.431 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 14:07:57.256648 through 14:07:57.308095, identical apart from the microsecond timestamps ...]
00:40:04.715 [2024-06-11 14:07:57.308376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.715 [2024-06-11 14:07:57.308416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.715 qpair failed and we were unable to recover it.
00:40:04.715 [2024-06-11 14:07:57.308720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.308762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.309123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.309135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.309447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.309497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.309738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.309791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.310003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.310016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.310160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.310172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.310438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.310450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.310607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.310620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.310907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.310920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.311134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.311146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 
00:40:04.715 [2024-06-11 14:07:57.311368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.311379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.715 [2024-06-11 14:07:57.311532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.715 [2024-06-11 14:07:57.311543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.715 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.311797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.311838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.312117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.312157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.312428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.312467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.312768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.312807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.313017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.313057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.313424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.313464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.313739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.313752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.313974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.313987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 
00:40:04.716 [2024-06-11 14:07:57.314203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.314215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.314426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.314441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.314583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.314595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.314737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.314749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.314895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.314908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.315210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.315250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.315443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.315494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.315765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.315805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.316011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.316051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.316409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.316449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 
00:40:04.716 [2024-06-11 14:07:57.316668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.316709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.317031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.317044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.317192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.317204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.317491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.317503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.317775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.317787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.318084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.318097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.318326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.318338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.318587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.318600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.716 qpair failed and we were unable to recover it. 00:40:04.716 [2024-06-11 14:07:57.318813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.716 [2024-06-11 14:07:57.318825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.319036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.319048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 
00:40:04.717 [2024-06-11 14:07:57.319177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.319190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.319407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.319418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.319647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.319659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.319860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.319873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.320028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.320040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.320182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.320194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.320461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.320474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.320625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.320637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.320786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.320799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.320942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.320954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 
00:40:04.717 [2024-06-11 14:07:57.321164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.321176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.321466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.321484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.321642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.321654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.321949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.321961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.322183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.322195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.322353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.322405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.322788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.322829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.323125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.323137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.323350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.323362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.323627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.323640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 
00:40:04.717 [2024-06-11 14:07:57.323784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.323796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.324069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.324114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.324391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.324431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.324719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.324760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.325035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.325075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.717 qpair failed and we were unable to recover it. 00:40:04.717 [2024-06-11 14:07:57.325335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.717 [2024-06-11 14:07:57.325375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.325580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.325621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.325887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.325927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.326171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.326183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.326517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.326530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 
00:40:04.718 [2024-06-11 14:07:57.326769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.326782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.326959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.326972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.327203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.327215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.327429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.327441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.327600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.327613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.327888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.327928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.328196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.328236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.328569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.328615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.328837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.328849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.329065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.329077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 
00:40:04.718 [2024-06-11 14:07:57.329222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.329234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.329386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.329398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.329540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.329553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.329767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.329779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.329990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.330030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.330302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.330342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.330602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.330643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.330922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.330962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.331230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.331269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.331532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.331574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 
00:40:04.718 [2024-06-11 14:07:57.331926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.331966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.332228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.718 [2024-06-11 14:07:57.332268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.718 qpair failed and we were unable to recover it. 00:40:04.718 [2024-06-11 14:07:57.332474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.332527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.332849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.332862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.333106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.333118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.333362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.333374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.333528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.333540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.333760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.333772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.333969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.333981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.334285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.334297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 
00:40:04.719 [2024-06-11 14:07:57.334513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.334526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.334753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.334767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.335095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.335136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.335415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.335454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.335795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.335836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.336004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.336044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.336380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.336420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.336712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.336753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.336968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.337008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 00:40:04.719 [2024-06-11 14:07:57.337253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.719 [2024-06-11 14:07:57.337265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.719 qpair failed and we were unable to recover it. 
00:40:04.720 [2024-06-11 14:07:57.337530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.337543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.337777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.337789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.338054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.338067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.338266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.338278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.338494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.338508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.338742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.338782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.339058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.339098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.339385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.339425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.339693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.339735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.340002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.340042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 
00:40:04.720 [2024-06-11 14:07:57.340375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.340415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.340639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.340680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.341010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.341023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.341330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.341343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.341554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.341567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.341769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.341782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.341984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.341997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.342167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.342180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.342330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.342342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 00:40:04.720 [2024-06-11 14:07:57.342559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.720 [2024-06-11 14:07:57.342573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.720 qpair failed and we were unable to recover it. 
00:40:04.720 [2024-06-11 14:07:57.342839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.720 [2024-06-11 14:07:57.342851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.720 qpair failed and we were unable to recover it.
00:40:04.720 [... the same three-line failure repeats ~210 times between 14:07:57.342839 and 14:07:57.396795: every connect() to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered. 202 of the attempts report tqpair=0x7ff240000b90; 8 report tqpair=0xdf6f80, in two brief runs around 14:07:57.361609-362737 and 14:07:57.373622-374630. The elapsed-time prefix advances from 00:40:04.720 to 00:40:04.727 over the run ...]
00:40:04.727 [2024-06-11 14:07:57.396939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.396950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.397182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.397195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.397348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.397360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.397631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.397644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.397855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.397867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.398002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.398013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.398160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.398172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.398313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.398325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.398482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.398494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.398790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.398802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 
00:40:04.727 [2024-06-11 14:07:57.399026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.399038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.399250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.399262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.399411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.399423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.399698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.399710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.399856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.399867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.400078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.400091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.400240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.400252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.400353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.400364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.400567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.400580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.400897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.400909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 
00:40:04.727 [2024-06-11 14:07:57.401067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.401079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.401230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.401242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.401460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.401472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.401687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.401699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.401987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.401999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.402212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.402223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.402436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.727 [2024-06-11 14:07:57.402447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.727 qpair failed and we were unable to recover it. 00:40:04.727 [2024-06-11 14:07:57.402665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.402678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.402969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.402982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.403194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.403206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 
00:40:04.728 [2024-06-11 14:07:57.403350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.403362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.403526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.403539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.403781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.403792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.403947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.403959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.404182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.404194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.404485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.404498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.404711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.404723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.404988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.405000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.405132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.405144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.405368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.405380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 
00:40:04.728 [2024-06-11 14:07:57.405647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.405659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.405807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.405818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.406043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.406055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.406321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.406334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.406533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.406545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.406751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.406763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.406921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.406933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.407137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.407149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.407416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.407428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.407625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.407637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 
00:40:04.728 [2024-06-11 14:07:57.407785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.407797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.408114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.408127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.408277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.408288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.408502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.408513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.408671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.408683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.408887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.408899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.409188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.409200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.409368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.409380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.409529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.409541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.409689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.409701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 
00:40:04.728 [2024-06-11 14:07:57.409850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.409890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.410125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.410165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.410495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.410537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.410802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.410842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.411186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.411199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.728 [2024-06-11 14:07:57.411362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.728 [2024-06-11 14:07:57.411374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.728 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.411640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.411652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.411884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.411925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.412219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.412264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.412557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.412598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 
00:40:04.729 [2024-06-11 14:07:57.412874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.412914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.413180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.413192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.413341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.413353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.413584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.413597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.413820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.413833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.414074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.414087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.414318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.414330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.414593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.414606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.414815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.414828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.414993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.415005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 
00:40:04.729 [2024-06-11 14:07:57.415173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.415186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.415320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.415332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.415502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.415514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.415802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.415814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.416024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.416036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.416267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.416280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.416489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.416502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.416715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.416727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.416834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.416845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.416993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.417005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 
00:40:04.729 [2024-06-11 14:07:57.417217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.417230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.417430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.417443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.417660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.417673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.417823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.417836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.417998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.418011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.418332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.418345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.418582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.418595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.418792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.418804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.419027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.419039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.419196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.419208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 
00:40:04.729 [2024-06-11 14:07:57.419417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.419428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.419719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.419732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.729 qpair failed and we were unable to recover it. 00:40:04.729 [2024-06-11 14:07:57.419887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.729 [2024-06-11 14:07:57.419899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.420064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.420076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.420215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.420226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.420363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.420375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.420504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.420516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.420803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.420816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.421096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.421110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.421321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.421332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 
00:40:04.730 [2024-06-11 14:07:57.421475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.421491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.421758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.421770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.421982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.421995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.422134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.422146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.422312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.422324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.422551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.422563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.422697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.422709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.422918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.422930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.423160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.423172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.423393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.423404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 
00:40:04.730 [2024-06-11 14:07:57.423606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.423618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.423820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.423832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.423980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.423993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.424140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.424151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.424349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.424361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.424515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.424528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.424746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.424759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.424959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.424971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.425119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.425130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 00:40:04.730 [2024-06-11 14:07:57.425395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.730 [2024-06-11 14:07:57.425408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.730 qpair failed and we were unable to recover it. 
00:40:04.730 [2024-06-11 14:07:57.425705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.730 [2024-06-11 14:07:57.425718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.730 qpair failed and we were unable to recover it.
00:40:04.730 [... the identical three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt from 14:07:57.425 through 14:07:57.476 ...]
00:40:04.736 [2024-06-11 14:07:57.476569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.476581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.476849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.476862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.477012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.477024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.477221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.477233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.477470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.477497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.477660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.477671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.477935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.736 [2024-06-11 14:07:57.477948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.736 qpair failed and we were unable to recover it. 00:40:04.736 [2024-06-11 14:07:57.478148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.478160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.478312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.478327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.478548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.478560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 
00:40:04.737 [2024-06-11 14:07:57.478760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.478771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.479046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.479086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.479441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.479491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.479793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.479840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.480037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.480049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.480209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.480222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.480389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.480402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.480634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.480676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.480905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.480945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.481214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.481254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 
00:40:04.737 [2024-06-11 14:07:57.481488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.481809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.481850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.482129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.482169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.482498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.482539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.482913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.482953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.483197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.483210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.483424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.483437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.483654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.483667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.483948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.483960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.484161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.484173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 
00:40:04.737 [2024-06-11 14:07:57.484302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.484314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.484535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.484548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.484812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.484824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.484991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.485003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.485281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.485293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.485558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.485572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.485774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.485786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.486054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.486066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.486211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.486223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.486511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.486524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 
00:40:04.737 [2024-06-11 14:07:57.486749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.486761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.486910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.486922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.737 qpair failed and we were unable to recover it. 00:40:04.737 [2024-06-11 14:07:57.487123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.737 [2024-06-11 14:07:57.487135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.487391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.487404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.487679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.487692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.487962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.487974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.488196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.488208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.488495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.488508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.488753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.488765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.488929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.488941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 
00:40:04.738 [2024-06-11 14:07:57.489171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.489183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.489450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.489462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.489697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.489710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.489918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.489930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.490134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.490146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.490411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.490424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.490638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.490650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.490918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.490931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.491136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.491148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.491370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.491382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 
00:40:04.738 [2024-06-11 14:07:57.491542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.491555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.491839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.491851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.492053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.492066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.492175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.492187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.492386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.492398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.492689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.492702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.492913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.492925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.493147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.493160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.493253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.493265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.493484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.493498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 
00:40:04.738 [2024-06-11 14:07:57.493762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.493774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.493984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.493996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.494211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.494224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.494515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.494528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.494680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.494692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.738 qpair failed and we were unable to recover it. 00:40:04.738 [2024-06-11 14:07:57.494972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.738 [2024-06-11 14:07:57.494986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.495296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.495309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.495520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.495533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.495774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.495786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.496073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.496085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 
00:40:04.739 [2024-06-11 14:07:57.496305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.496317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.496613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.496625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.496841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.496853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.497062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.497074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.497315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.497327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.497496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.497508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.497640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.497652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.497866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.497879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.498039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.498052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.498250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.498263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 
00:40:04.739 [2024-06-11 14:07:57.498469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.498485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.498638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.498650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.498881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.498894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.499110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.499123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.499257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.499269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.499424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.499436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.499604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.499617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.499758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.499770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.500050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.500062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.500277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.500290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 
00:40:04.739 [2024-06-11 14:07:57.500506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.500519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.500680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.500691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.500914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.500927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.501062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.739 [2024-06-11 14:07:57.501074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.739 qpair failed and we were unable to recover it. 00:40:04.739 [2024-06-11 14:07:57.501273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.501285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.501567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.501581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.501872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.501885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.501984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.501995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.502262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.502275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.502572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.502585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 
00:40:04.740 [2024-06-11 14:07:57.502805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.502817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.503081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.503093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.503252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.503264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.503548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.503561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.503723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.503736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.504008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.504023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.504223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.504236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.504456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.504468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.504701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.504713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.505053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.505065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 
00:40:04.740 [2024-06-11 14:07:57.505342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.505354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.505577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.505590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.505869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.505881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.506079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.506091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.506356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.506368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.506583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.506595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.506798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.506811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.507031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.507043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.507197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.507209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 00:40:04.740 [2024-06-11 14:07:57.507352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.740 [2024-06-11 14:07:57.507364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.740 qpair failed and we were unable to recover it. 
00:40:04.740 [2024-06-11 14:07:57.507470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.740 [2024-06-11 14:07:57.507485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.740 qpair failed and we were unable to recover it.
00:40:04.740 [2024-06-11 14:07:57.507720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.740 [2024-06-11 14:07:57.507732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.740 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every further connect attempt against tqpair=0x7ff240000b90 (addr=10.0.0.2, port=4420), timestamps 14:07:57.507881 through 14:07:57.560474; only the timestamps differ ...]
00:40:04.746 [2024-06-11 14:07:57.560683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.746 [2024-06-11 14:07:57.560724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.746 qpair failed and we were unable to recover it.
00:40:04.746 [2024-06-11 14:07:57.560986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.746 [2024-06-11 14:07:57.561026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.746 qpair failed and we were unable to recover it. 00:40:04.746 [2024-06-11 14:07:57.561238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.746 [2024-06-11 14:07:57.561250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.746 qpair failed and we were unable to recover it. 00:40:04.746 [2024-06-11 14:07:57.561507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.746 [2024-06-11 14:07:57.561549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.746 qpair failed and we were unable to recover it. 00:40:04.746 [2024-06-11 14:07:57.561829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.746 [2024-06-11 14:07:57.561870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.746 qpair failed and we were unable to recover it. 00:40:04.746 [2024-06-11 14:07:57.562099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.746 [2024-06-11 14:07:57.562138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.746 qpair failed and we were unable to recover it. 00:40:04.746 [2024-06-11 14:07:57.562332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.746 [2024-06-11 14:07:57.562344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.746 qpair failed and we were unable to recover it. 00:40:04.746 [2024-06-11 14:07:57.562554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.562566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.562822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.562863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.563140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.563180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.563355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.563368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 
00:40:04.747 [2024-06-11 14:07:57.563580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.563593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.563812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.563853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.564063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.564102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.564380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.564421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.564718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.564759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.565009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.565055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.565317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.565329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.565542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.565554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.565755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.565767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.566046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.566087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 
00:40:04.747 [2024-06-11 14:07:57.566346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.566387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.566667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.566679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.566886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.566898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.567042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.567054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.567218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.567258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.567537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.567578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.567849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.567890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.568239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.568251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.568388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.568400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.568620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.568632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 
00:40:04.747 [2024-06-11 14:07:57.568841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.568854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.569065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.569106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.569322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.569362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.569617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.569629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.569941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.569981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.570359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.570405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.570691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.570703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.570922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.570934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.571177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.571188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.571429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.571441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 
00:40:04.747 [2024-06-11 14:07:57.571729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.571741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.572003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.572016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.572152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.572164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.572390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.572431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.747 qpair failed and we were unable to recover it. 00:40:04.747 [2024-06-11 14:07:57.572712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.747 [2024-06-11 14:07:57.572753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.572962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.573002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.573355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.573406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.573623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.573636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.573942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.573982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.574188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.574229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 
00:40:04.748 [2024-06-11 14:07:57.574414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.574425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.574667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.574710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.575020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.575060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.575364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.575404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.575676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.575717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.576077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.576125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.576350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.576361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.576505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.576517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.576649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.576661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.576862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.576874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 
00:40:04.748 [2024-06-11 14:07:57.577010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.577021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.577200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.577240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.577511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.577552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.577810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.577850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.578131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.578171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.578421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.578432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.578592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.578604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.578878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.578919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.579181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.579221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.579484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.579496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 
00:40:04.748 [2024-06-11 14:07:57.579708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.579721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.579887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.579910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.580055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.580067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.580284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.580323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.580531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.580573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.580806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.580848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.581139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.581178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.581457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.581500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.581698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.581710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.581847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.581859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 
00:40:04.748 [2024-06-11 14:07:57.582094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.582105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.582302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.582348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.748 [2024-06-11 14:07:57.582645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.748 [2024-06-11 14:07:57.582686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.748 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.582920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.582961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.583328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.583367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.583698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.583739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.584114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.584155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.584421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.584462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.584899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.584940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.585214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.585254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 
00:40:04.749 [2024-06-11 14:07:57.585535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.585547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.585787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.585799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.585965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.585977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.586215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.586254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.586558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.586599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.586884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.586931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.587140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.587180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.587473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.587523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.587805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.587845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.588126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.588168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 
00:40:04.749 [2024-06-11 14:07:57.588400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.588440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.588700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.588712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.588844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.588856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.589125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.589137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.589345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.589357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.589622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.589635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.589783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.589795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.590035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.590047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.590264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.590276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.590501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.590542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 
00:40:04.749 [2024-06-11 14:07:57.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.590800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.591067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.591108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.591454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.591465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.591691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.749 [2024-06-11 14:07:57.591703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.749 qpair failed and we were unable to recover it. 00:40:04.749 [2024-06-11 14:07:57.591844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.591857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.592062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.592075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.592292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.592304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.592500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.592513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.592712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.592723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.592925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.592937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 
00:40:04.750 [2024-06-11 14:07:57.593071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.593083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.593258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.593270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.593527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.593568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.593870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.593911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.594180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.594220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.594511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.594553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.594841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.594854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.595069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.595082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.595289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.595301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 00:40:04.750 [2024-06-11 14:07:57.595515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:04.750 [2024-06-11 14:07:57.595527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:04.750 qpair failed and we were unable to recover it. 
00:40:04.750 [2024-06-11 14:07:57.595764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.750 [2024-06-11 14:07:57.595776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.750 qpair failed and we were unable to recover it.
00:40:04.750 [2024-06-11 14:07:57.595924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:04.750 [2024-06-11 14:07:57.595937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:04.750 qpair failed and we were unable to recover it.
00:40:05.032 [... the same three-line failure repeats for roughly 200 further reconnect attempts, timestamps 14:07:57.596087 through 14:07:57.645284 (wall-clock marker advancing from 00:40:04.750 to 00:40:05.032), every attempt hitting errno = 111 on tqpair=0x7ff240000b90, addr=10.0.0.2, port=4420, and ending "qpair failed and we were unable to recover it." ...]
00:40:05.032 [2024-06-11 14:07:57.645432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.032 [2024-06-11 14:07:57.645444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.032 qpair failed and we were unable to recover it. 00:40:05.032 [2024-06-11 14:07:57.645650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.032 [2024-06-11 14:07:57.645662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.032 qpair failed and we were unable to recover it. 00:40:05.032 [2024-06-11 14:07:57.645813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.032 [2024-06-11 14:07:57.645825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.032 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.645982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.645995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.646151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.646163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.646375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.646387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.646526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.646538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.646692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.646704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.646920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.646932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.647072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.647084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 
00:40:05.033 [2024-06-11 14:07:57.647293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.647305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.647523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.647535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.647742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.647781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.648046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.648085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.648355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.648395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.648752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.648765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.649031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.649043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.649256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.649268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.649489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.649501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.649720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.649732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 
00:40:05.033 [2024-06-11 14:07:57.649884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.649930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.650128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.650168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.650448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.650462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.650675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.650687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.650792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.650804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.651020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.651032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.651178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.651190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.651431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.651443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.651616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.651628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.651765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.651805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 
00:40:05.033 [2024-06-11 14:07:57.652133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.652174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.652452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.652464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.652599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.652611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.652827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.652839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.653071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.653083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.653284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.653296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.653501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.653514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.653721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.653760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.653991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.654031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.654193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.654233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 
00:40:05.033 [2024-06-11 14:07:57.654493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.033 [2024-06-11 14:07:57.654505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.033 qpair failed and we were unable to recover it. 00:40:05.033 [2024-06-11 14:07:57.654784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.654796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.655012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.655024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.655225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.655236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.655547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.655588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.655879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.655919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.656122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.656162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.656426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.656465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.656745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.656757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.656892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.656904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 
00:40:05.034 [2024-06-11 14:07:57.657201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.657212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.657482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.657495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.657735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.657775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.658051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.658090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.658447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.658459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.658685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.658697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.658985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.658997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.659200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.659212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.659344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.659356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.659521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.659562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 
00:40:05.034 [2024-06-11 14:07:57.659823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.659862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.660210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.660250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.660575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.660615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.660826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.660838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.661037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.661048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.661223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.661235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.661395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.661407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.661684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.661725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.661999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.662038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.662389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.662429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 
00:40:05.034 [2024-06-11 14:07:57.662757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.662769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.662984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.662996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.663195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.663207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.663522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.663564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.663849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.663888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.664118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.664158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.664401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.664441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.664790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.664802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.034 qpair failed and we were unable to recover it. 00:40:05.034 [2024-06-11 14:07:57.665004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.034 [2024-06-11 14:07:57.665016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.665312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.665324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 
00:40:05.035 [2024-06-11 14:07:57.665612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.665652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.665983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.666023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.666301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.666341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.666723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.666765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.667191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.667267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.667638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.667687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.668044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.668087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.668324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.668365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.668697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.668738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.669019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.669060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 
00:40:05.035 [2024-06-11 14:07:57.669345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.669385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.669720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.669761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.670099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.670140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.670404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.670444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.670657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.670697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.670963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.671003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.671300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.671314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.671520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.671533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.671743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.671755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.671956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.671968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 
00:40:05.035 [2024-06-11 14:07:57.672198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.672210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.672422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.672434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.672586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.672598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.672802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.672815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.673114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.673153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.673496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.673537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.673825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.673865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.674201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.674241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.674467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.674516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.674713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.674753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 
00:40:05.035 [2024-06-11 14:07:57.675031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.675071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.675335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.675381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.675647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.675659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.675947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.675959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.676232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.676244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.676513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.676525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.035 [2024-06-11 14:07:57.676841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.035 [2024-06-11 14:07:57.676853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.035 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.677069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.677081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.677308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.677320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.677642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.677684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 
00:40:05.036 [2024-06-11 14:07:57.677959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.677999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.678284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.678323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.678595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.678636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.678961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.678973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.679266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.679277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.679507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.679534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.679749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.679788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.680098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.680139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.680414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.680459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.680693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.680707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 
00:40:05.036 [2024-06-11 14:07:57.680926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.680938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.681206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.681218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.681419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.681431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.681630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.681642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.681858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.681870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.682036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.682048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.682345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.682385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.682681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.682721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.682871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.682883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 00:40:05.036 [2024-06-11 14:07:57.683108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.036 [2024-06-11 14:07:57.683148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.036 qpair failed and we were unable to recover it. 
00:40:05.041 [2024-06-11 14:07:57.741394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.041 [2024-06-11 14:07:57.741406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.041 qpair failed and we were unable to recover it. 00:40:05.041 [2024-06-11 14:07:57.741672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.041 [2024-06-11 14:07:57.741685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.041 qpair failed and we were unable to recover it. 00:40:05.041 [2024-06-11 14:07:57.741884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.041 [2024-06-11 14:07:57.741896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.041 qpair failed and we were unable to recover it. 00:40:05.041 [2024-06-11 14:07:57.742163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.041 [2024-06-11 14:07:57.742176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.041 qpair failed and we were unable to recover it. 00:40:05.041 [2024-06-11 14:07:57.742398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.041 [2024-06-11 14:07:57.742437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.041 qpair failed and we were unable to recover it. 00:40:05.041 [2024-06-11 14:07:57.742732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.041 [2024-06-11 14:07:57.742777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.743067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.743079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.743279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.743302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.743639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.743680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.743950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.743990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 
00:40:05.042 [2024-06-11 14:07:57.744342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.744382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.744709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.744750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.745012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.745052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.745395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.745436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.745819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.745866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.746152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.746163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.746442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.746501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.746767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.746807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.747135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.747175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.747447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.747497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 
00:40:05.042 [2024-06-11 14:07:57.747848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.747888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.748218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.748258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.748562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.748604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.748982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.749022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.749352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.749391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.749774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.749815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.750158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.750238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.750549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.750596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.750885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.750926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.751211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.751252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 
00:40:05.042 [2024-06-11 14:07:57.751576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.751619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.751979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.752018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.752399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.752439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.752789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.752830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.753106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.753118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.753327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.753339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.753604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.753616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.753948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.753989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.754321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.754360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.754640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.754686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 
00:40:05.042 [2024-06-11 14:07:57.754973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.042 [2024-06-11 14:07:57.755012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.042 qpair failed and we were unable to recover it. 00:40:05.042 [2024-06-11 14:07:57.755367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.755408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.755767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.755807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.756167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.756207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.756506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.756547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.756908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.756954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.757283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.757322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.757601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.757643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.757999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.758038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.758274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.758315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 
00:40:05.043 [2024-06-11 14:07:57.758642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.758688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.759000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.759012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.759160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.759172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.759447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.759459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.759687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.759699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.759899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.759910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.760220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.760259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.760612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.760653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.760878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.760918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.761130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.761170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 
00:40:05.043 [2024-06-11 14:07:57.761511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.761545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.761766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.761778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.762086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.762125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.762489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.762530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.762809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.762821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.763038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.763050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.763277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.763289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.763558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.763570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.763821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.763833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.764148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.764160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 
00:40:05.043 [2024-06-11 14:07:57.764471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.764493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.764781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.764822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.765160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.765200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.765497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.765539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.765809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.765820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.766088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.766100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.766316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.766328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.766549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.766591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.766913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.766924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.043 qpair failed and we were unable to recover it. 00:40:05.043 [2024-06-11 14:07:57.767150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.043 [2024-06-11 14:07:57.767161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 
00:40:05.044 [2024-06-11 14:07:57.767379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.767391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.767609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.767621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.767795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.767835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.768047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.768087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.768435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.768486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.768700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.768740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.769068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.769108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.769435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.769475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.769753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.769793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.770023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.770064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 
00:40:05.044 [2024-06-11 14:07:57.770416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.770456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.770751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.770792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.771140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.771179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.771487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.771529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.771854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.771866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.772037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.772076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.772430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.772469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.772762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.772803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.773105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.773145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.773420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.773459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 
00:40:05.044 [2024-06-11 14:07:57.773749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.773790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.773994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.774042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.774185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.774197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.774498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.774539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.774850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.774889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.775225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.775236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.775517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.775531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.775752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.775764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.775980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.775992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.776218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.776230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 
00:40:05.044 [2024-06-11 14:07:57.776533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.776545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.776808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.776819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.777038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.777049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.777341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.777353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.777622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.777634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.777797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.777809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.044 [2024-06-11 14:07:57.777976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.044 [2024-06-11 14:07:57.778016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.044 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.778348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.778388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.778648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.778688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.778965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.779005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 
00:40:05.045 [2024-06-11 14:07:57.779290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.779330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.779606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.779647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.780026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.780066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.780402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.780442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.780812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.780854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.781081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.781128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.781403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.781414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.781655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.781667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.781813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.781826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.782116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.782128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 
00:40:05.045 [2024-06-11 14:07:57.782337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.782349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.782491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.782502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.782713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.782752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.783036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.783077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.783385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.783425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.783724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.783765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.784048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.784088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.784421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.784461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.784754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.784794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 00:40:05.045 [2024-06-11 14:07:57.785158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.045 [2024-06-11 14:07:57.785198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.045 qpair failed and we were unable to recover it. 
00:40:05.045 [2024-06-11 14:07:57.785546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.045 [2024-06-11 14:07:57.785588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.045 qpair failed and we were unable to recover it.
00:40:05.045 [... the same connect() failed / sock connection error / qpair failed triple repeats for tqpair=0x7ff240000b90 through 2024-06-11 14:07:57.792435 ...]
00:40:05.046 [2024-06-11 14:07:57.792728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe04f70 is same with the state(5) to be set
00:40:05.046 [2024-06-11 14:07:57.793061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.046 [2024-06-11 14:07:57.793140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420
00:40:05.046 qpair failed and we were unable to recover it.
00:40:05.046 [... the same error triple repeats for tqpair=0x7ff238000b90 through 2024-06-11 14:07:57.815939 ...]
00:40:05.048 [2024-06-11 14:07:57.816281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.048 [2024-06-11 14:07:57.816360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.048 qpair failed and we were unable to recover it.
00:40:05.052 [... the same error triple repeats for tqpair=0x7ff240000b90 through 2024-06-11 14:07:57.852244 ...]
00:40:05.052 [2024-06-11 14:07:57.852572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.852613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.852900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.852941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.853164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.853205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.853405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.853417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.853619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.853631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.853789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.853801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.854031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.854071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.854353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.854394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.854644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.854686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.854900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.854940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 
00:40:05.052 [2024-06-11 14:07:57.855164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.855178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.855379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.855391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.855545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.855557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.855739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.855751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.856042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.856054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.856374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.856413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.856677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.856719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.856978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.856990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.857186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.857424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.857436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 
00:40:05.052 [2024-06-11 14:07:57.857728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.857776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.858041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.858053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.858158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.858170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.858331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.858343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.858571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.858583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.858872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.858884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.859179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.859190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.859337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.859349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.859504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.859517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.859781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.859792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 
00:40:05.052 [2024-06-11 14:07:57.859942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.859954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.052 qpair failed and we were unable to recover it. 00:40:05.052 [2024-06-11 14:07:57.860160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.052 [2024-06-11 14:07:57.860201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.860507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.860548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.860811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.860851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.861077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.861089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.861251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.861263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.861548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.861589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.861827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.861868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.862152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.862164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.862456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.862468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 
00:40:05.053 [2024-06-11 14:07:57.862743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.862756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.863037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.863049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.863320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.863332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.863484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.863497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.863643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.863655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.863774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.863785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.864082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.864123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.864404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.864444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.864730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.864771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.865117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.865157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 
00:40:05.053 [2024-06-11 14:07:57.865454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.865468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.865765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.865777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.866069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.866081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.866315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.866327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.866529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.866542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.866646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.866689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.867016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.867056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.867222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.867263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.867532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.867544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.867768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.867779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 
00:40:05.053 [2024-06-11 14:07:57.867942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.867954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.868220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.868232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.868499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.868512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.868721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.868733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.868962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.869003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.869198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.869238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.869494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.869506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.869618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.869630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.869797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.869819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 00:40:05.053 [2024-06-11 14:07:57.870084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.053 [2024-06-11 14:07:57.870124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.053 qpair failed and we were unable to recover it. 
00:40:05.053 [2024-06-11 14:07:57.870395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.870407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.870633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.870645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.870924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.870936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.871072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.871101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.871388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.871400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.871732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.871745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.872094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.872135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.872366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.872378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.872624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.872665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.872826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.872867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 
00:40:05.054 [2024-06-11 14:07:57.873130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.873170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.873375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.873387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.873550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.873563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.873829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.873841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.874062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.874074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.874274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.874286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.874575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.874596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.874881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.874893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.875131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.875143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.875344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.875357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 
00:40:05.054 [2024-06-11 14:07:57.875515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.875529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.875739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.875752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.876020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.876032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.876302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.876314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.876564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.876577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.876786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.876798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.877069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.877081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.877286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.877298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.877614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.877626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.877911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.877924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 
00:40:05.054 [2024-06-11 14:07:57.878199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.878212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.878518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.878530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.878671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.878683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.878912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.878953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.879184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.879225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.879539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.879579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.054 [2024-06-11 14:07:57.879889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.054 [2024-06-11 14:07:57.879941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.054 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.880139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.880151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.880418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.880429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.880660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.880702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 
00:40:05.055 [2024-06-11 14:07:57.880979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.881019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.881370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.881410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.881706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.881748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.882035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.882075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.882371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.882394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.882665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.882677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.882953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.882994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.883287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.883327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.883598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.883639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.883989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.884029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 
00:40:05.055 [2024-06-11 14:07:57.884292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.884304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.884548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.884561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.884780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.884820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.885018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.885058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.885285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.885325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.885588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.885629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.885981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.886020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.886356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.886368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.886585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.886597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 00:40:05.055 [2024-06-11 14:07:57.886885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.886935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it. 
00:40:05.055 [2024-06-11 14:07:57.887168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.055 [2024-06-11 14:07:57.887215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.055 qpair failed and we were unable to recover it.
[The same three-message failure sequence repeats, with only the timestamps advancing, for every subsequent reconnect attempt through 2024-06-11 14:07:57.944 (elapsed 00:40:05.338): connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovery.]
00:40:05.338 [2024-06-11 14:07:57.944408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.944420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.944645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.944657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.944809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.944849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.945141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.945182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.945450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.945499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.945762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.945802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.946084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.946125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.946404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.946444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.946780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.946820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.947153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.947194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 
00:40:05.338 [2024-06-11 14:07:57.947468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.947519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.947738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.947778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.948047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.948089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.948436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.948448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.948707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.948748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.949083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.949130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.949447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.949497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.949737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.949778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.950045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.950087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.950339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.950351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 
00:40:05.338 [2024-06-11 14:07:57.950645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.950657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.950865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.950877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.951146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.951158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.951306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.951318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.951489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.951516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.951667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.951678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.951887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.951899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.952054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.952066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.952351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.952363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.952583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.952595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 
00:40:05.338 [2024-06-11 14:07:57.952752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.338 [2024-06-11 14:07:57.952766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.338 qpair failed and we were unable to recover it. 00:40:05.338 [2024-06-11 14:07:57.953079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.953091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.953342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.953382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.953737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.953780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.954062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.954101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.954399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.954439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.954783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.954825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.955176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.955216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.955500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.955541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.955890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.955932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 
00:40:05.339 [2024-06-11 14:07:57.956143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.956183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.956447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.956459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.956749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.956761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.956874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.956885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.957160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.957200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.957491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.957533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.957887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.957928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.958278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.958318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.958590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.958602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.958851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.958891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 
00:40:05.339 [2024-06-11 14:07:57.959191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.959231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.959512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.959524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.959879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.959919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.960273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.960314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.960587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.960599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.960798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.960810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.961110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.961149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.961438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.961487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.961808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.961840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.962135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.962175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 
00:40:05.339 [2024-06-11 14:07:57.962440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.962489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.962773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.962814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.963166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.963205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.963429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.339 [2024-06-11 14:07:57.963470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.339 qpair failed and we were unable to recover it. 00:40:05.339 [2024-06-11 14:07:57.963801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.963813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.963899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.963911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.964010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.964022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.964311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.964323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.964528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.964569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.964919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.964960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 
00:40:05.340 [2024-06-11 14:07:57.965235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.965249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.965460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.965471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.965697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.965710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.965874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.965886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.966032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.966043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.966261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.966273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.966562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.966574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.966785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.966797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.967081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.967112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.967418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.967458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 
00:40:05.340 [2024-06-11 14:07:57.967818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.967859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.968102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.968143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.968353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.968393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.968749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.968790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.969152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.969187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.969429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.969441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.969665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.969678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.969946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.969958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.970105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.970117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 00:40:05.340 [2024-06-11 14:07:57.970380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.340 [2024-06-11 14:07:57.970393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.340 qpair failed and we were unable to recover it. 
00:40:05.340 [2024-06-11 14:07:57.970660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.970672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.970825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.970837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.971036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.971049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.971379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.971421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.971709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.971750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.972085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.972126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.972451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.972464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.972737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.972750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.973055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.973096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.973429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.973470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 
00:40:05.341 [2024-06-11 14:07:57.973771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.973812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.974041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.974082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.974418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.974460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.974823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.974865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.975148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.975189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.975467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.975518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.975751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.975792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.976076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.976116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.976397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.976434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.976588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.976600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 
00:40:05.341 [2024-06-11 14:07:57.976817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.976834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.977060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.977073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.977314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.977326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.977553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.977566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.977856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.977868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.978082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.978094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.978362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.978373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.978663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.978698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.979026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.979067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.979304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.979344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 
00:40:05.341 [2024-06-11 14:07:57.979578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.979620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.979905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.979945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.980160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.980200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.341 qpair failed and we were unable to recover it. 00:40:05.341 [2024-06-11 14:07:57.980470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.341 [2024-06-11 14:07:57.980520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 00:40:05.342 [2024-06-11 14:07:57.980876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.342 [2024-06-11 14:07:57.980917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 00:40:05.342 [2024-06-11 14:07:57.981178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.342 [2024-06-11 14:07:57.981218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 00:40:05.342 [2024-06-11 14:07:57.981520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.342 [2024-06-11 14:07:57.981570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 00:40:05.342 [2024-06-11 14:07:57.981741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.342 [2024-06-11 14:07:57.981752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 00:40:05.342 [2024-06-11 14:07:57.981954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.342 [2024-06-11 14:07:57.981965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 00:40:05.342 [2024-06-11 14:07:57.982185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.342 [2024-06-11 14:07:57.982227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.342 qpair failed and we were unable to recover it. 
00:40:05.342 [2024-06-11 14:07:57.982605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.342 [2024-06-11 14:07:57.982645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.342 qpair failed and we were unable to recover it.
00:40:05.342 [2024-06-11 14:07:57.982920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.342 [2024-06-11 14:07:57.982960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.342 qpair failed and we were unable to recover it.
...
00:40:05.348 [2024-06-11 14:07:58.041396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.348 [2024-06-11 14:07:58.041434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.348 qpair failed and we were unable to recover it.
00:40:05.348 [2024-06-11 14:07:58.041577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.348 [2024-06-11 14:07:58.041589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.348 qpair failed and we were unable to recover it.
00:40:05.348 [2024-06-11 14:07:58.041907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.041947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.042291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.042331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.042573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.042615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.042915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.042955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.043249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.043288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.043526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.043568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.043798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.043838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.044121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.044161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.044401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.044442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.044701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.044713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 
00:40:05.348 [2024-06-11 14:07:58.044954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.044966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.045210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.045222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.348 qpair failed and we were unable to recover it. 00:40:05.348 [2024-06-11 14:07:58.045435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.348 [2024-06-11 14:07:58.045446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.045622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.045663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.045863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.045905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.046196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.046237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.046517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.046558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.046836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.046848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.047056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.047067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.047315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.047355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 
00:40:05.349 [2024-06-11 14:07:58.047638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.047680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.047924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.047936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.048049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.048061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.048252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.048264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.048483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.048495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.048661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.048673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.048898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.048939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.049206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.049246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.049506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.049519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.049789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.049801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 
00:40:05.349 [2024-06-11 14:07:58.050000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.050012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.050160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.050171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.050439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.050451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.050729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.050741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.050990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.051030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.051318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.051359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.051706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.051736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.052018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.052064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.052277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.052319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.052527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.052540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 
00:40:05.349 [2024-06-11 14:07:58.052853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.052893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.053170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.053211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.053490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.053879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.053920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.054213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.054254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.054537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.054578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.054825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.054837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.055114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.055126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.349 qpair failed and we were unable to recover it. 00:40:05.349 [2024-06-11 14:07:58.055326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.349 [2024-06-11 14:07:58.055338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.055549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.055561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 
00:40:05.350 [2024-06-11 14:07:58.055774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.055786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.056063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.056103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.056383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.056423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.056716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.056729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.056941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.056953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.057184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.057224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.057440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.057507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.057840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.057881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.058166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.058207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.058503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.058516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 
00:40:05.350 [2024-06-11 14:07:58.058717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.058728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.058994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.059006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.059212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.059224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.059428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.059439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.059645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.059658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.059896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.059909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.060213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.060253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.060527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.060568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.060828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.060840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.061094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.061135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 
00:40:05.350 [2024-06-11 14:07:58.061419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.061459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.061761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.061774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.062031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.062043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.062199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.062211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.062361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.062402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.062675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.062716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.062989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.063029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.063240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.063287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.063610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.063621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 00:40:05.350 [2024-06-11 14:07:58.063886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.350 [2024-06-11 14:07:58.063898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.350 qpair failed and we were unable to recover it. 
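For context: errno = 111 on Linux is ECONNREFUSED, meaning each TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP well-known port) was actively refused, which normally indicates that nothing was listening on the target side at that point in the test. A minimal standalone sketch, not SPDK code, that reproduces the same errno against a host with no listener on the port:

/* probe.c: minimal sketch (not SPDK code) of the socket-level failure above.
 * 10.0.0.2:4420 stands in for the NVMe/TCP target from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the target, errno is 111 (ECONNREFUSED),
         * matching the posix_sock_create error in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Compiled with cc and run while the target port is closed, this should print connect() failed, errno = 111 (Connection refused), the same failure the SPDK initiator keeps logging.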
[... the failure repeats for tqpair=0x7ff240000b90 through 14:07:58.065 ...]
00:40:05.351 [2024-06-11 14:07:58.065864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.351 [2024-06-11 14:07:58.065941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff238000b90 with addr=10.0.0.2, port=4420
00:40:05.351 qpair failed and we were unable to recover it.
[... two more failures for tqpair=0x7ff238000b90 (14:07:58.066), then the same failure resumes for tqpair=0x7ff240000b90 ...]
[... the same three-line failure repeats for tqpair=0x7ff240000b90 from 14:07:58.067 through 14:07:58.083 ...]
[... four more failures for tqpair=0x7ff240000b90 (14:07:58.083 through 14:07:58.084) ...]
00:40:05.353 [2024-06-11 14:07:58.084819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.353 [2024-06-11 14:07:58.084903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420
00:40:05.353 qpair failed and we were unable to recover it.
[... two failures for tqpair=0x7ff238000b90 (14:07:58.085), then the failure resumes for tqpair=0x7ff240000b90 ...]
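The changing tqpair pointers above (0x7ff240000b90, 0x7ff238000b90, 0xdf6f80) appear to be distinct qpair objects on which the initiator keeps retrying while the target stays unreachable; every attempt ends in the same refused connect(). A generic bounded-retry sketch, not SPDK's actual reconnect logic, with a hypothetical try_connect() stand-in for the real connection attempt:

/* retry.c: generic sketch, NOT SPDK's reconnect path. Shows the shape of a
 * retry loop whose every attempt fails with ECONNREFUSED, which is roughly
 * what fills this log while the target is down. */
#include <errno.h>
#include <stdio.h>
#include <time.h>

static int try_connect(void)
{
    return -ECONNREFUSED;   /* stand-in: target is down, every attempt fails */
}

static int connect_with_retry(int max_attempts)
{
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 100000000L }; /* 100 ms */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int rc = try_connect();
        if (rc == 0)
            return 0;                         /* connected */
        fprintf(stderr, "attempt %d: connect failed, errno = %d\n",
                attempt, -rc);
        if (rc != -ECONNREFUSED)
            return rc;                        /* not a retryable error */
        nanosleep(&delay, NULL);              /* fixed delay between attempts */
    }
    return -ECONNREFUSED;                     /* gave up; qpair stays failed */
}

int main(void)
{
    return connect_with_retry(5) ? 1 : 0;
}

In the log above the retries evidently continue far longer than this sketch allows; the point is only the shape of the loop, with each refused attempt logged and the qpair left unrecovered.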
00:40:05.353 [2024-06-11 14:07:58.087091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.087103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.087335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.087346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.087495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.087523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.087747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.087787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.088147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.088187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.088462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.088517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.088837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.088876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.089160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.089205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.089400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.089437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.089660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.089673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 
00:40:05.353 [2024-06-11 14:07:58.089873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.089885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.090115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.090127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.090360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.090372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.090483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.090495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.090791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.090832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.091054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.091094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.091368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.091409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.091754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.091766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.091991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.092003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.092301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.092341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 
00:40:05.353 [2024-06-11 14:07:58.092673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.092714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.093049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.093090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.093378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.093418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.093723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.093763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.094029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.094041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.094246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.094257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.094413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.353 [2024-06-11 14:07:58.094425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.353 qpair failed and we were unable to recover it. 00:40:05.353 [2024-06-11 14:07:58.094714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.094726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.095028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.095040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.095243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.095255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 
00:40:05.354 [2024-06-11 14:07:58.095536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.095548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.095762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.095774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.096057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.096070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.096359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.096370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.096480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.096493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.096642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.096655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.096880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.096920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.097247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.097287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.097642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.097684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.097928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.097940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 
00:40:05.354 [2024-06-11 14:07:58.098096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.098108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.098351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.098363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.098637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.098678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.098961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.099002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.099217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.099258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.099521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.099561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.099808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.099821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.100117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.100131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.100410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.100437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.100795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.100872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 
00:40:05.354 [2024-06-11 14:07:58.101255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.101300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.101601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.101645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.101968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.101981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.102212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.102253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.102534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.102575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.102895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.102907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.103175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.103188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.103387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.103399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.103626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.103638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.103848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.103861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 
00:40:05.354 [2024-06-11 14:07:58.104127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.104139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.104304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.104316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.354 [2024-06-11 14:07:58.104541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.354 [2024-06-11 14:07:58.104582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.354 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.104860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.104900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.105227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.105267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.105604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.105645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.105926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.105966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.106190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.106230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.106590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.106632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.107009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.107048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 
00:40:05.355 [2024-06-11 14:07:58.107314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.107354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.107651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.107693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.108017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.108029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.108252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.108264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.108410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.108424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.108648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.108660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.108877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.108889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.109109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.109122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.109337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.109349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.109640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.109652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 
00:40:05.355 [2024-06-11 14:07:58.109816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.109828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.110038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.110050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.110190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.110202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.110415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.110428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.110632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.110644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.110851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.110891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.111112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.111525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.111566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.111852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.111892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.112177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.112217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 
00:40:05.355 [2024-06-11 14:07:58.112498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.112539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.112800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.112841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.113165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.113178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.113473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.113487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.113722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.113762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.114025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.114066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.114361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.114401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.355 [2024-06-11 14:07:58.114671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.355 [2024-06-11 14:07:58.114683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.355 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.114901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.114914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.115112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.115123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 
00:40:05.356 [2024-06-11 14:07:58.115387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.115427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.115767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.115809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.116160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.116172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.116317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.116329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.116618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.116631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.116847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.116859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.117070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.117082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.117284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.117296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.117585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.117597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.117760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.117772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 
00:40:05.356 [2024-06-11 14:07:58.117986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.117998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.118209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.118221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.118435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.118447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.118653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.118665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.118883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.118929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.119280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.119320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.119641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.119685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.119976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.120016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.120318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.120358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.120692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.120704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 
00:40:05.356 [2024-06-11 14:07:58.120875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.120887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.121099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.121111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.121319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.121331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.121601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.121613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.356 qpair failed and we were unable to recover it. 00:40:05.356 [2024-06-11 14:07:58.121735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.356 [2024-06-11 14:07:58.121747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.121917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.121929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.122129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.122141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.122364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.122376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.122677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.122690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.122842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.122854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 
00:40:05.357 [2024-06-11 14:07:58.123075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.123087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.123389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.123430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.123783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.123795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.124066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.124078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.124288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.124300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.124588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.124600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.124908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.124948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.125283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.125325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.125620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.125632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.125875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.125887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 
00:40:05.357 [2024-06-11 14:07:58.126179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.126191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.126344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.126356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.126575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.126587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.126800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.126812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.127085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.127097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.127453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.127504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.127781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.127794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.128060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.128072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.128210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.128222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 00:40:05.357 [2024-06-11 14:07:58.128435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.357 [2024-06-11 14:07:58.128448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.357 qpair failed and we were unable to recover it. 
00:40:05.357 [2024-06-11 14:07:58.128606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.357 [2024-06-11 14:07:58.128618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.357 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111), the same sock connection error for tqpair=0x7ff240000b90 (addr=10.0.0.2, port=4420), and the same "qpair failed and we were unable to recover it." message repeat continuously from 14:07:58.128892 through 14:07:58.186594; duplicate entries elided ...]
00:40:05.363 [2024-06-11 14:07:58.186822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.363 [2024-06-11 14:07:58.186834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.363 qpair failed and we were unable to recover it.
00:40:05.363 [2024-06-11 14:07:58.187107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.187147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.187432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.187473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.187691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.187703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.187968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.187980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.188115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.188127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.188449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.188499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.188850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.188891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.189242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.189281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.189564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.189605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.189867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.189907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 
00:40:05.363 [2024-06-11 14:07:58.190145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.190158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.190442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.190454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.190665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.190677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.190888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.190900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.191189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.191223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.191494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.191534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.191880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.191920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.192191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.192233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.363 [2024-06-11 14:07:58.192514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.363 [2024-06-11 14:07:58.192555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.363 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.192741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.192782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 
00:40:05.364 [2024-06-11 14:07:58.193052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.193090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.193396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.193408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.193648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.193661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.193957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.193992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.194222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.194261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.194538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.194579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.194886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.194899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.195111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.195122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.195429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.195470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.195774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.195816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 
00:40:05.364 [2024-06-11 14:07:58.196027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.196039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.196207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.196247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.196595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.196637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.196969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.196981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.197194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.197208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.197443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.197455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.197748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.197761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.197916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.197928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.198082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.198122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.198393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.198433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 
00:40:05.364 [2024-06-11 14:07:58.198786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.198828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.199060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.199101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.199449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.199499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.199834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.199874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.200155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.200197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.200458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.200508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.200790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.200831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.201117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.201161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.201449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.201499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.201778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.201819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 
00:40:05.364 [2024-06-11 14:07:58.202040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.202052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.202312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.202325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.202544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.202557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.202821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.202833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.203033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.203045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.203255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.364 [2024-06-11 14:07:58.203267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.364 qpair failed and we were unable to recover it. 00:40:05.364 [2024-06-11 14:07:58.203463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.203479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.203709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.203721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.203883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.203896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.204051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.204064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 
00:40:05.365 [2024-06-11 14:07:58.204259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.204271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.204443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.204455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.204675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.204688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.204916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.204928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.205193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.205205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.205414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.205426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.205589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.205601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.205881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.205894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.206056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.206068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.206233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.206245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 
00:40:05.365 [2024-06-11 14:07:58.206390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.206402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.206680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.206693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.206788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.206799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.206953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.206965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.207238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.207283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.207546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.207586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.207940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.207981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.208247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.208287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.208563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.208604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.208888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.208928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 
00:40:05.365 [2024-06-11 14:07:58.209153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.209166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.209381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.209393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.209648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.209660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.209831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.209843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.209999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.210011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.210181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.210194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.210460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.210472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.210637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.210649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.365 [2024-06-11 14:07:58.210863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.365 [2024-06-11 14:07:58.210875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.365 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.211086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.211098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 
00:40:05.366 [2024-06-11 14:07:58.211331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.211343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.211563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.211576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.211808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.211849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.212128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.212168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.212379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.212420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.212689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.212731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.213015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.213055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.213447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.213494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.213705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.213745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.214075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.214115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 
00:40:05.366 [2024-06-11 14:07:58.214494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.214535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.214857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.214942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.215197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.215237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.215429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.215451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.215697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.215710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.215936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.215976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.216195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.216236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.216538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.216579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.216774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.216814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.217092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.217105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 
00:40:05.366 [2024-06-11 14:07:58.217337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.217349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.217614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.217627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.217771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.217783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.217946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.217958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.218164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.218177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.218329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.218342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.218492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.218504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.218644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.218656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.218883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.218896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.219140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.219151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 
00:40:05.366 [2024-06-11 14:07:58.219302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.219314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.219524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.219537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.219754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.219793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.220008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.220048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.220318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.220359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.220721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.220762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.366 qpair failed and we were unable to recover it. 00:40:05.366 [2024-06-11 14:07:58.221033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.366 [2024-06-11 14:07:58.221073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.367 qpair failed and we were unable to recover it. 00:40:05.367 [2024-06-11 14:07:58.221363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.367 [2024-06-11 14:07:58.221403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.367 qpair failed and we were unable to recover it. 00:40:05.367 [2024-06-11 14:07:58.221692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.367 [2024-06-11 14:07:58.221733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.367 qpair failed and we were unable to recover it. 00:40:05.367 [2024-06-11 14:07:58.221920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.367 [2024-06-11 14:07:58.221932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.367 qpair failed and we were unable to recover it. 
00:40:05.367 [2024-06-11 14:07:58.222159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.367 [2024-06-11 14:07:58.222171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.367 qpair failed and we were unable to recover it.
00:40:05.367 [2024-06-11 14:07:58.224908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.367 [2024-06-11 14:07:58.224986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420
00:40:05.367 qpair failed and we were unable to recover it.
00:40:05.650 [... the same three-line failure sequence repeats continuously from 14:07:58.222159 through 14:07:58.282011 (console timestamps 00:40:05.367-00:40:05.650): every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111, nearly all for tqpair=0x7ff240000b90 plus three attempts for tqpair=0xdf6f80 at 14:07:58.224986-14:07:58.225628, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:40:05.650 [2024-06-11 14:07:58.282328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.282367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.282652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.282699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.282974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.283014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.283207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.283247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.283597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.283638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.283967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.284006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.284293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.284333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.284556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.284598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.284947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.284987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.285337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.285376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 
00:40:05.650 [2024-06-11 14:07:58.285726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.285768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.286143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.286182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.286459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.286530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.286819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.286860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.287208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.287248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.287467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.287516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.287853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.287895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.288155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.288195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.288567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.288607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.288960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.289001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 
00:40:05.650 [2024-06-11 14:07:58.289277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.289319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.289616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.289628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.289850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.289862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.290071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.290083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.290279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.290291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.650 [2024-06-11 14:07:58.290462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.650 [2024-06-11 14:07:58.290474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.650 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.290780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.290821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.291098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.291138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.291415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.291428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.291691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.291703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 
00:40:05.651 [2024-06-11 14:07:58.291993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.292005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.292210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.292222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.292436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.292447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.292591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.292603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.292821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.292833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.293061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.293091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.293395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.293435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.293803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.293845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.294142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.294153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.294352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.294363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 
00:40:05.651 [2024-06-11 14:07:58.294575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.294588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.294860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.294906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.295250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.295261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.295555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.295596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.295901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.295941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.296210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.296249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.296581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.296622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.296927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.296968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.297249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.297260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.297493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.297505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 
00:40:05.651 [2024-06-11 14:07:58.297785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.297797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.298123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.298163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.298520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.298562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.298903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.298939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.299246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.299286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.299626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.299668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.300046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.300085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.300360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.300400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.300710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.300752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.300965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.301004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 
00:40:05.651 [2024-06-11 14:07:58.301282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.301322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.301628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.301675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.302004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.302044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.302324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.651 [2024-06-11 14:07:58.302364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.651 qpair failed and we were unable to recover it. 00:40:05.651 [2024-06-11 14:07:58.302665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.302706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.303058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.303098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.303386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.303426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.303667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.303709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.304059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.304137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.304507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.304553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 
00:40:05.652 [2024-06-11 14:07:58.304730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.304772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.305057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.305098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.305312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.305352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.305699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.305741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.306026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.306067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.306385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.306424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.306735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.306776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf6f80 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.307050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.307092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.307446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.307497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.307812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.307853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 
00:40:05.652 [2024-06-11 14:07:58.308112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.308152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.308442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.308470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.308836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.308877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.309173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.309185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.309400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.309412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.309702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.309715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.309923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.309963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.310237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.310277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.310623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.310653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.310867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.310907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 
00:40:05.652 [2024-06-11 14:07:58.311167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.311207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.311474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.311490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.311704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.311715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.311934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.311945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.312181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.312193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.312412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.312452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.312682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.312722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.313057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.313098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.313450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.313499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.313842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.313881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 
00:40:05.652 [2024-06-11 14:07:58.314264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.314305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.314671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.314712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.652 [2024-06-11 14:07:58.314986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.652 [2024-06-11 14:07:58.315026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.652 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.315380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.315420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.315794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.315834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.316132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.316173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.316456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.316505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.316738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.316778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.317003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.317049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.317324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.317336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 
00:40:05.653 [2024-06-11 14:07:58.317441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.317452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.317689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.317701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.317939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.317951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.318250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.318291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.318620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.318661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.319023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.319064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.319339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.319379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.319676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.319717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.320049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.320090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.320377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.320389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 
00:40:05.653 [2024-06-11 14:07:58.320554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.320566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.320831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.320843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.321147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.321188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.321542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.321582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.321889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.321930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.322258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.322297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.322575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.322616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.322853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.322894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.323243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.323283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 00:40:05.653 [2024-06-11 14:07:58.323560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.653 [2024-06-11 14:07:58.323601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.653 qpair failed and we were unable to recover it. 
00:40:05.653 [2024-06-11 14:07:58.323932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.653 [2024-06-11 14:07:58.323973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.653 qpair failed and we were unable to recover it.
00:40:05.653 [... the same three-line failure (posix_sock_create connect() errno = 111 / sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 14:07:58.324255 through 14:07:58.373001 ...]
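For readers triaging this failure: errno 111 on Linux is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP port) is being actively refused because nothing is listening there anymore. The sketch below is a minimal, self-contained reproduction of that failure mode; it is illustrative only, assumes a reachable Linux host with no listener on the port, and is not part of the SPDK test suite.

```c
/* Minimal reproduction of the errno = 111 (ECONNREFUSED) failure seen above.
 * Assumes a reachable Linux host with no listener on the target port, which
 * is the situation after the nvmf target application has been killed. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```

SPDK's posix_sock_create() fails on the same connect() call, and nvme_tcp_qpair_connect_sock() then gives up on the qpair, which is why the two error lines always appear as a pair in this log.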
00:40:05.658 [2024-06-11 14:07:58.373309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.658 [2024-06-11 14:07:58.373322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.658 qpair failed and we were unable to recover it.
00:40:05.658 [... the same failure repeats from 14:07:58.373587 through 14:07:58.374014 ...]
00:40:05.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1676377 Killed "${NVMF_APP[@]}" "$@"
00:40:05.658 [... the same failure repeats from 14:07:58.374214 through 14:07:58.374725 ...]
00:40:05.658 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:40:05.658 [2024-06-11 14:07:58.375018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.658 [2024-06-11 14:07:58.375030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.658 qpair failed and we were unable to recover it.
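The bash notice above, "line 36: 1676377 Killed ...", records that the nvmf target application (PID 1676377, launched from line 36 of target_disconnect.sh) was terminated by SIGKILL partway through the connect loop, consistent with the disconnect scenario this test exercises; it is what turned every connect() into ECONNREFUSED, and "disconnect_init 10.0.0.2" then begins bringing the target back. A shell prints "Killed" whenever a child it is waiting on dies by SIGKILL. The hedged sketch below shows that mechanism; the paused child here is a stand-in, not the real target app.

```c
/* Illustration of where a shell's "Killed" notice comes from: a child that
 * dies by SIGKILL is reported through waitpid() as signal-terminated, and
 * the shell translates WTERMSIG(status) == SIGKILL into the word "Killed". */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        pause();                      /* child: stand-in for the nvmf target app */
        _exit(0);
    }

    kill(pid, SIGKILL);               /* what happens to the target in this test */

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL) {
        /* bash would render this as: "<line>: <pid> Killed <command>" */
        printf("%d Killed\n", (int)pid);
    }
    return 0;
}
```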
00:40:05.658 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:40:05.659 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:40:05.659 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:40:05.659 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed / sock connection error / qpair failed retry triples from 14:07:58.375321 to 14:07:58.384323, identical apart from timestamps; duplicates elided ...]
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1677209
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1677209
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1677209 ']'
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
[... connect() failed / sock connection error / qpair failed retry triples from 14:07:58.384467 to 14:07:58.385719 elided ...]
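In the relaunch command above, -i 0 sets the shared-memory ID, -e 0xFFFF enables tracepoint groups, and -m 0xF0 is a hexadecimal core mask: bit i selects CPU core i, so 0xF0 places the restarted target on cores 4-7. A tiny illustrative sketch (not part of the test suite) that decodes such a mask:

```c
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0; /* the -m value from the log */
    for (int core = 0; core < 64; core++)
        if (mask & (1UL << core))
            printf("core %d selected\n", core); /* 0xF0 -> cores 4, 5, 6, 7 */
    return 0;
}
```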
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:05.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:40:05.660 14:07:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed / sock connection error / qpair failed retry triples from 14:07:58.385933 to 14:07:58.387566 elided ...]
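waitforlisten is a shell helper (autotest_common.sh) that retries, up to the max_retries=100 seen in the trace, until the freshly started nvmf_tgt answers on the RPC socket /var/tmp/spdk.sock. A rough standalone analogue of that waiting loop in C (a sketch of the idea only; the real helper polls the SPDK RPC socket from shell):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Poll a UNIX domain socket until something accepts connections on it,
 * like waiting for nvmf_tgt to come up on /var/tmp/spdk.sock. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {   /* max_retries=100 in the log */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                         /* target is listening */
        }
        close(fd);
        usleep(100 * 1000);                   /* 100 ms between attempts */
    }
    return -1;                                /* gave up */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
}
```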
[... connect() failed / sock connection error / qpair failed retry triples continue from 14:07:58.387792 through 14:07:58.417653, identical apart from timestamps; duplicates elided ...]
00:40:05.664 [2024-06-11 14:07:58.417857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.417870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.417978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.417990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.418256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.418268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.418535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.418547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.418816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.418828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.419094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.419106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.419303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.419315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.419467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.419482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.419728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.419741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.420032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.420044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 
00:40:05.664 [2024-06-11 14:07:58.420202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.420214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.420514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.420526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.420727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.420739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.421045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.421057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.421271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.421283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.421484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.421496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.421702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.421715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.421928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.421939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.422085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.422097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.422374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.422387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 
00:40:05.664 [2024-06-11 14:07:58.422654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.422666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.422803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.422815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.423024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.423037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.423302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.423314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.423523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.423536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.423766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.423779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.423976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.423992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.424195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.424207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.424428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.424441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.424601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.424613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 
00:40:05.664 [2024-06-11 14:07:58.424762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.424773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.424989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.425001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.425135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.425148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.425433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.425445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.425766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.425778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.425923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.425935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.664 [2024-06-11 14:07:58.426188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.664 [2024-06-11 14:07:58.426200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.664 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.426467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.426484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.426707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.426720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.426863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.426875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 
00:40:05.665 [2024-06-11 14:07:58.427095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.427107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.427317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.427328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.427603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.427615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.427827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.427839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.427988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.428000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.428211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.428223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.428458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.428470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.428652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.428664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.428805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.428816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.429085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.429098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 
00:40:05.665 [2024-06-11 14:07:58.429387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.429399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.429604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.429617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.429885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.429897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.430118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.430131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.430257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.430270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.430423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.430435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.430632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.430643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.430845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.430857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.431017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.431029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.431228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.431240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 
00:40:05.665 [2024-06-11 14:07:58.431470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.431486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.431698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.431710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.432021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.432033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.432235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.432247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.432514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.432527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.432823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.432835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.433107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.433121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.433439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.433452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.433691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.433704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.433849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.433861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 
00:40:05.665 [2024-06-11 14:07:58.434096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.434108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.434445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.434457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.434704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.434716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.434989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.435001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.435214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.665 [2024-06-11 14:07:58.435226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.665 qpair failed and we were unable to recover it. 00:40:05.665 [2024-06-11 14:07:58.435369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.666 [2024-06-11 14:07:58.435381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.666 qpair failed and we were unable to recover it. 00:40:05.666 [2024-06-11 14:07:58.435600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.666 [2024-06-11 14:07:58.435613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.666 qpair failed and we were unable to recover it. 00:40:05.666 [2024-06-11 14:07:58.435884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.666 [2024-06-11 14:07:58.435896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.666 qpair failed and we were unable to recover it. 00:40:05.666 [2024-06-11 14:07:58.436120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.666 [2024-06-11 14:07:58.436132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.666 qpair failed and we were unable to recover it. 00:40:05.666 [2024-06-11 14:07:58.436277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.666 [2024-06-11 14:07:58.436289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.666 qpair failed and we were unable to recover it. 
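errno = 111 on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 is answered with a RST because nothing is listening on the NVMe/TCP port at the moment SPDK's posix_sock_create() calls connect(). A minimal standalone sketch (a hypothetical test program, not SPDK code) that reproduces the same errno against an address/port with no listener:

/* connect_refused.c - sketch: connect() to a port with no listener
 * fails with errno 111 (ECONNREFUSED), the error logged above.
 * Address and port are taken from the log; adjust for a local test. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}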
00:40:05.666 [2024-06-11 14:07:58.437635] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization...
00:40:05.666 [2024-06-11 14:07:58.437692] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:05.666 [... connect()/qpair-recovery errors continue, interleaved with the target start-up banner ...]
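The EAL banner shows the nvmf target process only beginning initialization at 14:07:58.437, which is consistent with the refused connections: no listener existed on 10.0.0.2:4420 yet. Assuming nvme-cli is available on the initiator host (address and port taken from the log), a quick way to check whether the NVMe/TCP listener is up is

    nvme discover -t tcp -a 10.0.0.2 -s 4420

which prints the discovery log page once the target is listening and fails with a connection error while it is not.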
00:40:05.668 [... the connect()/qpair-recovery error triple keeps repeating after target start-up, through 14:07:58.456, still without a successful connection ...]
00:40:05.668 [2024-06-11 14:07:58.456599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.456612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.456825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.456837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.457076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.457088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.457376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.457388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.457625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.457638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.457874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.457886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.458095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.458107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.458334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.458346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.458617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.458629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.458792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.458804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 
00:40:05.668 [2024-06-11 14:07:58.458970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.458982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.459204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.459216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.459446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.459458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.459625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.459637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.668 qpair failed and we were unable to recover it. 00:40:05.668 [2024-06-11 14:07:58.459847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.668 [2024-06-11 14:07:58.459859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.460097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.460109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.460322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.460347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.460657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.460669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.460795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.460806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.460970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.460982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 
00:40:05.669 [2024-06-11 14:07:58.461180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.461192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.461350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.461361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.461652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.461665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.461910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.461922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.462211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.462223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.462511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.462524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.462812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.462824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.463024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.463036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.463274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.463286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.463551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.463563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 
00:40:05.669 [2024-06-11 14:07:58.463844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.463856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.464085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.464097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.464258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.464270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.464488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.464500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.464648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.464660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.464950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.464962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.465237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.465249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.465449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.465461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.465759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.465771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.466013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.466025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 
00:40:05.669 [2024-06-11 14:07:58.466223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.466235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.466446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.466458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.466679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.466691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.466975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.466987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.467131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.467142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.467342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.467354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.467654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.467667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.467864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.467876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.468095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.468107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.468314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.468326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 
00:40:05.669 [2024-06-11 14:07:58.468539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.468551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.468761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.468773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.669 [2024-06-11 14:07:58.469068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.669 [2024-06-11 14:07:58.469081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.669 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.469347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.469359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.469627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.469639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.469954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.469966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.470165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.470181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.470329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.470341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.470634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.470646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.470793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.470805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 
00:40:05.670 [2024-06-11 14:07:58.471035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.471047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.471148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.471160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.471429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.471442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.471720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.471733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.471947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.471959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.472180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.472192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.472399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.472411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.472578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.472591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.472789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.472801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.473067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.473079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 
00:40:05.670 [2024-06-11 14:07:58.473299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.473312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.473536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.473548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.473750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.473762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.474031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.474043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.474309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.474321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.474612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.474624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.474892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.474905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.475119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.475131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.475365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.475377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.475511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.475524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 
00:40:05.670 [2024-06-11 14:07:58.475791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.475803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.476016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.476028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.476312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.476324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.476591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.476604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.476816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.476828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.477028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.477040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.477197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.477209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.477309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.477321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.477484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.477496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.477710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.477722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 
00:40:05.670 [2024-06-11 14:07:58.477919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.670 [2024-06-11 14:07:58.477931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.670 qpair failed and we were unable to recover it. 00:40:05.670 [2024-06-11 14:07:58.478096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.478108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.478260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.478271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.478481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.478494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.478780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.478793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.478991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.479003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.479296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.479310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.479530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.479542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.479692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.479704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.479915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.479927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 
00:40:05.671 [2024-06-11 14:07:58.480141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.480153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.480386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.480398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.480628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.480640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.480779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.480790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.481081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.481093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.481243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.481255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.481523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.481536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.481828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.481840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.481985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.481997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.482207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.482219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 
00:40:05.671 [2024-06-11 14:07:58.482437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.482449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.482598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.482610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.482808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.482821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.482967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.482978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.483107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.483119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.483338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.483350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.483652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.483664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.483881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.483893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.484171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.484183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.484410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.484421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 
00:40:05.671 [2024-06-11 14:07:58.484587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.484599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.484764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.484776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.485043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.485055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.485255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.485267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.485587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.485600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.485810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.485822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.486037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.486048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.486315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.486327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.486616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.486629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.671 qpair failed and we were unable to recover it. 00:40:05.671 [2024-06-11 14:07:58.486916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.671 [2024-06-11 14:07:58.486928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 
00:40:05.672 [2024-06-11 14:07:58.487235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.487248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.487459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.487471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.487769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.487782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.488000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 EAL: No free 2048 kB hugepages reported on node 1 00:40:05.672 [2024-06-11 14:07:58.488012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.488278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.488290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.488439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.488451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.488677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.488690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.488909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.488920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.489203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.489215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 00:40:05.672 [2024-06-11 14:07:58.489483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.672 [2024-06-11 14:07:58.489496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.672 qpair failed and we were unable to recover it. 
00:40:05.960 [2024-06-11 14:07:58.537337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.537349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.537499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.537511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.537733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.537745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.538037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.538049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.538313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.538325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.538508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.538520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.538814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.538827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.539114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.539126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.539368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.539380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 00:40:05.960 [2024-06-11 14:07:58.539535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.960 [2024-06-11 14:07:58.539548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.960 qpair failed and we were unable to recover it. 
00:40:05.961 [2024-06-11 14:07:58.539770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.539782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.540046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.540058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.540327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.540339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.540539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.540551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.540764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.540776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.541006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.541018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.541283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.541295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.541533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.541546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.541749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.541761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.542071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.542083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 
00:40:05.961 [2024-06-11 14:07:58.542301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.542313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.542534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.542547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.542710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.542722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.542941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.542953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.543178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.543190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.543430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.543442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.543711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.543724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.543869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.543881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.543997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.544008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.544169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.544181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 
00:40:05.961 [2024-06-11 14:07:58.544405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.544418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.544712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.544724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.544962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.544974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.545242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.545255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.545541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.545553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.545788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.545800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.545998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.546013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.546222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.546234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.546434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.546446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.546712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.546725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 
00:40:05.961 [2024-06-11 14:07:58.546949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.546961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.547121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.547133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.547343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.547356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.547598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.547610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.547707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.547719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.961 qpair failed and we were unable to recover it. 00:40:05.961 [2024-06-11 14:07:58.547997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.961 [2024-06-11 14:07:58.548010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.548226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.548238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.548503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.548515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.548732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.548745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 
00:40:05.962 [2024-06-11 14:07:58.549029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:05.962 [2024-06-11 14:07:58.549039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.549054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.549346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.549358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.549569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.549582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.549805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.549818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.550017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.550030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.550245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.550257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.550489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.550502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.550807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.550821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.551065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.551078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 
00:40:05.962 [2024-06-11 14:07:58.551277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.551288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.551579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.551592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.551757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.551769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.551978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.551991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.552208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.552221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.552465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.552489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.552735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.552748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.552948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.552962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.553230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.553243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.553459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.553471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 
00:40:05.962 [2024-06-11 14:07:58.553680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.553693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.553940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.553953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.554168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.554180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.554271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.554283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.554486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.554500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.554643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.554655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.554791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.554803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.555132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.555145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.555316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.555329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.555526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.555539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 
00:40:05.962 [2024-06-11 14:07:58.555758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.555770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.556037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.556049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.556205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.556217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.962 [2024-06-11 14:07:58.556376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.962 [2024-06-11 14:07:58.556389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.962 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.556493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.556506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.556811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.556823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.557040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.557052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.557253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.557265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.557462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.557474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.557768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.557782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 
00:40:05.963 [2024-06-11 14:07:58.558071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.558083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.558373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.558388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.558542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.558555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.558726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.558738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.559013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.559026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.559246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.559259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.559425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.559437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.559591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.559603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.559839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.559851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.560000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.560012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 
00:40:05.963 [2024-06-11 14:07:58.560303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.560316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.560453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.560466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.560710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.560723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.560933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.560944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.561160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.561173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.561305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.561317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.561493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.561505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.561717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.561730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.561941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.561954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.562161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.562173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 
00:40:05.963 [2024-06-11 14:07:58.562387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.562399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.562605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.562617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.562830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.562843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.563069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.563081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.563324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.563336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.563590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.563603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.563777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.563789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.563940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.563952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.564270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.564283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 00:40:05.963 [2024-06-11 14:07:58.564550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.963 [2024-06-11 14:07:58.564562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.963 qpair failed and we were unable to recover it. 
00:40:05.964 [2024-06-11 14:07:58.564762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.564774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.565058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.565070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.565359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.565372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.565607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.565620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.565845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.565857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.566126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.566138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.566380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.566392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.566606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.566619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.566778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.566790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.566991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.567003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 
00:40:05.964 [2024-06-11 14:07:58.567174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.567186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.567462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.567478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.567701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.567713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.567939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.567951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.568261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.568273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.568547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.568559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.568769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.568781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.568953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.568966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.569228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.569240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 00:40:05.964 [2024-06-11 14:07:58.569389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.964 [2024-06-11 14:07:58.569401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.964 qpair failed and we were unable to recover it. 
00:40:05.964 [... the identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock failure sequence for tqpair=0x7ff240000b90 (addr=10.0.0.2, port=4420) repeats continuously from 14:07:58.569 through 14:07:58.615, each attempt ending "qpair failed and we were unable to recover it." ...]
00:40:05.970 [2024-06-11 14:07:58.615229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.615241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.615470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.615486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.615659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.615672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.615963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.615974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.616155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.616167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.616382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.616394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.616684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.616697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.616875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.616887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.617021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.617034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.617338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.617350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 
00:40:05.970 [2024-06-11 14:07:58.617555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.617567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.617734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.617746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.617959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.617971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.618236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.618248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.618473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.618496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.970 [2024-06-11 14:07:58.618640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.970 [2024-06-11 14:07:58.618651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.970 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.618793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.618805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.619044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.619056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.619321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.619333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.619599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.619611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 
00:40:05.971 [2024-06-11 14:07:58.619746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.619758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.619997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.620008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.620285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.620298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.620495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.620507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.620774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.620786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.620944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.620956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.621223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.621236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.621444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.621456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.621749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.621763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.621978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.621990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 
00:40:05.971 [2024-06-11 14:07:58.622278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.622289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.622503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.622516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.622783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.622795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.623108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.623120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.623336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.623348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.623617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.623629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.623788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.623800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.624070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.624081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.624333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.624345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.624553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.624565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 
00:40:05.971 [2024-06-11 14:07:58.624853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.624865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.625072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.625084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.625289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.625301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.625568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.625580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.625871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.625883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.626094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.626106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.626315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.626327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.626491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.626504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.626657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.626670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.626820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.626832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 
00:40:05.971 [2024-06-11 14:07:58.627119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.627131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.627329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.971 [2024-06-11 14:07:58.627341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.971 qpair failed and we were unable to recover it. 00:40:05.971 [2024-06-11 14:07:58.627607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.627619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.627771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.627783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.627979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.627990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.628203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.628215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.628423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.628435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.628724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.628736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.629001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.629012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.629161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.629173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 
00:40:05.972 [2024-06-11 14:07:58.629381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.629393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.629601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.629613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.629925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.629937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.630139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.630151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.630352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.630364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.630587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.630599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.630743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.630755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.631045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.631057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.631163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.631177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.631333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.631345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 
00:40:05.972 [2024-06-11 14:07:58.631621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.631633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.631838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.631851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.632091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.632103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.632330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.632342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.632484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.632496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.632695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.632707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.632905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.632917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.633134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.633147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.633414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.633426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.633645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.633657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 
00:40:05.972 [2024-06-11 14:07:58.633809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.633822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.633973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.633985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.634199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.634212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.972 [2024-06-11 14:07:58.634367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.972 [2024-06-11 14:07:58.634381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.972 qpair failed and we were unable to recover it. 00:40:05.973 [2024-06-11 14:07:58.634594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.973 [2024-06-11 14:07:58.634608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.973 qpair failed and we were unable to recover it. 00:40:05.973 [2024-06-11 14:07:58.634795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.973 [2024-06-11 14:07:58.634808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.973 qpair failed and we were unable to recover it. 00:40:05.973 [2024-06-11 14:07:58.635041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.973 [2024-06-11 14:07:58.635054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.973 qpair failed and we were unable to recover it. 00:40:05.973 [2024-06-11 14:07:58.635250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.973 [2024-06-11 14:07:58.635263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.973 qpair failed and we were unable to recover it. 00:40:05.973 [2024-06-11 14:07:58.635486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.973 [2024-06-11 14:07:58.635499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.973 qpair failed and we were unable to recover it. 00:40:05.973 [2024-06-11 14:07:58.635704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.973 [2024-06-11 14:07:58.635717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.973 qpair failed and we were unable to recover it. 
00:40:05.973 [2024-06-11 14:07:58.635844] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:05.973 [2024-06-11 14:07:58.635878] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:05.973 [2024-06-11 14:07:58.635892] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:05.973 [2024-06-11 14:07:58.635905] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:05.973 [2024-06-11 14:07:58.635916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:05.973 [2024-06-11 14:07:58.635940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.973 [2024-06-11 14:07:58.635953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.973 qpair failed and we were unable to recover it.
00:40:05.973 [2024-06-11 14:07:58.635986] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:40:05.973 [2024-06-11 14:07:58.636192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.973 [2024-06-11 14:07:58.636204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.973 [2024-06-11 14:07:58.636078] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:40:05.973 qpair failed and we were unable to recover it.
00:40:05.973 [2024-06-11 14:07:58.636187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:40:05.973 [2024-06-11 14:07:58.636187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
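The app_setup_trace NOTICE lines above give the how-to for capturing the tracepoints this run enabled (Tracepoint Group Mask 0xFFFF). As a minimal sketch, assuming the nvmf target is still running as trace instance 0 on the test node and that spdk_trace from the same SPDK build is on PATH (the output file names are illustrative, not from the log):

    # Snapshot live events from the running app's shared-memory trace
    # (shm name "nvmf", instance id 0), exactly as the NOTICE suggests:
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
    # Or keep the raw trace buffer for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0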
[... the same connect() failed / sock connection error / qpair failed triplet repeats, with new timestamps, through the final attempt below ...]
00:40:05.975 [2024-06-11 14:07:58.654583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.976 [2024-06-11 14:07:58.654598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.976 qpair failed and we were unable to recover it.
00:40:05.976 [2024-06-11 14:07:58.654796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.654809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.655021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.655034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.655169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.655182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.655314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.655326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.655487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.655500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.655673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.655688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.655854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.655867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.656085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.656098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.656242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.656255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.656454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.656467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 
00:40:05.976 [2024-06-11 14:07:58.656678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.656692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.656917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.656931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.657078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.657091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.657374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.657388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.657550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.657564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.657776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.657798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.658014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.658027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.658295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.658308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.658440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.658452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.658753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.658766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 
00:40:05.976 [2024-06-11 14:07:58.659037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.659051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.659268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.659287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.659520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.659533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.659733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.659746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.660036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.660049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.660250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.660263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.660544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.660559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.660824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.660836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.661123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.661135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.661296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.661310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 
00:40:05.976 [2024-06-11 14:07:58.661443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.661456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.661609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.661622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.661841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.661854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.661982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.661994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.662204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.662216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.662487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.662500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.662712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.662724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.976 qpair failed and we were unable to recover it. 00:40:05.976 [2024-06-11 14:07:58.662922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.976 [2024-06-11 14:07:58.662935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.663231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.663245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.663354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.663366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 
00:40:05.977 [2024-06-11 14:07:58.663599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.663613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.663833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.663846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.664053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.664066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.664363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.664375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.664594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.664607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.664716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.664728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.665039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.665051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.665221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.665234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.665393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.665406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.665603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.665617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 
00:40:05.977 [2024-06-11 14:07:58.665806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.665819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.666018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.666030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.666190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.666202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.666419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.666432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.666678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.666692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.666888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.666901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.667053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.667065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.667295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.667308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.667508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.667521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.667728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.667740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 
00:40:05.977 [2024-06-11 14:07:58.667870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.667882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.668081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.668096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.668311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.668325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.668527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.668540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.668687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.668699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.668848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.668861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.668996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.669008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.669206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.669220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.669353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.669366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 00:40:05.977 [2024-06-11 14:07:58.669523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.977 [2024-06-11 14:07:58.669536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.977 qpair failed and we were unable to recover it. 
00:40:05.977 [2024-06-11 14:07:58.669672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.669684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.669883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.669896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.670188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.670201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.670416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.670429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.670727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.670740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.670961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.670974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.671181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.671193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.671405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.671418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.671620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.671633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.671734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.671746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 
00:40:05.978 [2024-06-11 14:07:58.671864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.671876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.672024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.672036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.672304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.672318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.672465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.672482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.672608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.672621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.672912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.672925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.673142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.673156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.673370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.673382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.673681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.673695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.673830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.673843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 
00:40:05.978 [2024-06-11 14:07:58.674130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.674143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.674266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.674278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.674495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.674508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.674644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.674656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.674870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.674883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.675104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.675117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.675334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.675347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.675567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.675579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.675720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.675732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.675890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.675901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 
00:40:05.978 [2024-06-11 14:07:58.676113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.676126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.676350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.676365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.676582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.676595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.676837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.676849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.677070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.677082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.677200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.677212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.677427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.677440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.677584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.978 [2024-06-11 14:07:58.677597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.978 qpair failed and we were unable to recover it. 00:40:05.978 [2024-06-11 14:07:58.677822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.677834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.678044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.678057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 
00:40:05.979 [2024-06-11 14:07:58.678279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.678292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.678493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.678506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.678664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.678676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.678813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.678825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.678984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.678997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.679278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.679291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.679574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.679587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.679856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.679870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.680177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.680191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.680449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.680463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 
00:40:05.979 [2024-06-11 14:07:58.680791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.680804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.681010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.681023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.681324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.681337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.681657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.681670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.681968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.681981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.682274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.682286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.682490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.682503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.682647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.682660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.682764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.682779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 00:40:05.979 [2024-06-11 14:07:58.682943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.979 [2024-06-11 14:07:58.682957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.979 qpair failed and we were unable to recover it. 
00:40:05.979 [2024-06-11 14:07:58.683172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.979 [2024-06-11 14:07:58.683184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.979 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 14:07:58.683 through 14:07:58.735 with only the timestamps changing: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x7ff240000b90 fails with errno = 111, and each time the qpair fails and cannot be recovered ...]
00:40:05.985 [2024-06-11 14:07:58.735932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.985 [2024-06-11 14:07:58.735944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.985 qpair failed and we were unable to recover it. 00:40:05.985 [2024-06-11 14:07:58.736163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.985 [2024-06-11 14:07:58.736176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.985 qpair failed and we were unable to recover it. 00:40:05.985 [2024-06-11 14:07:58.736449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.985 [2024-06-11 14:07:58.736461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.736706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.736719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.737034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.737047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.737277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.737288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.737512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.737525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.737821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.737833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.738036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.738048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.738345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.738357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 
00:40:05.986 [2024-06-11 14:07:58.738575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.738587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.738808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.738820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.739005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.739017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.739251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.739263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.739570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.739582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.739784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.739796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.740086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.740099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.740312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.740324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.740592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.740604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.740747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.740759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 
00:40:05.986 [2024-06-11 14:07:58.741050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.741062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.741289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.741301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.741570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.741583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.741744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.741756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.741948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.741960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.742181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.742194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.742498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.742510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.742692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.742704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.742943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.742958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.743270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.743282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 
00:40:05.986 [2024-06-11 14:07:58.743572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.743584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.743873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.743886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.744221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.744233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.744501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.744513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.744758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.744770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.744934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.744946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.745236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.745249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.745542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.745555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.986 qpair failed and we were unable to recover it. 00:40:05.986 [2024-06-11 14:07:58.745855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.986 [2024-06-11 14:07:58.745868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.746064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.746076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 
00:40:05.987 [2024-06-11 14:07:58.746367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.746379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.746665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.746677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.746985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.746997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.747230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.747243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.747514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.747527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.747755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.747768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.748056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.748069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.748334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.748346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.748659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.748671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.748891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.748903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 
00:40:05.987 [2024-06-11 14:07:58.749123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.749135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.749354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.749366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.749659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.749672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.749872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.749885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.750046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.750058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.750275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.750288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.750500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.750512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.750782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.750794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.751014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.751026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.751245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.751258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 
00:40:05.987 [2024-06-11 14:07:58.751469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.751486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.751723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.751736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.751952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.751964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.752260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.752272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.752587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.752600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.752887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.752899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.753130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.753142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.753433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.753446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.753637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.753652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.753945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.753957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 
00:40:05.987 [2024-06-11 14:07:58.754124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.754136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.754364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.754376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.754583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.754595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.987 [2024-06-11 14:07:58.754864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.987 [2024-06-11 14:07:58.754877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.987 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.755141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.755153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.755326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.755339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.755561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.755574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.755793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.755806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.756092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.756104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.756266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.756278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 
00:40:05.988 [2024-06-11 14:07:58.756562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.756575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.756792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.756803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.756960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.756972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.757185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.757197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.757403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.757415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.757650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.757663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.757929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.757941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.758182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.758195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.758472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.758499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.758747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.758759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 
00:40:05.988 [2024-06-11 14:07:58.759027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.759039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.759254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.759266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.759532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.759545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.759715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.759727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.760021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.760034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.760187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.760199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.760411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.760424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.760655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.760668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.760900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.760911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.761228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.761240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 
00:40:05.988 [2024-06-11 14:07:58.761485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.761498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.761766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.761778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.761941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.761953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.762164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.762177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.762456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.762469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.762716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.762728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.762893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.762906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.763168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.763180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.763473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.763491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.763711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.763723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 
00:40:05.988 [2024-06-11 14:07:58.763874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.988 [2024-06-11 14:07:58.763886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.988 qpair failed and we were unable to recover it. 00:40:05.988 [2024-06-11 14:07:58.764126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.764137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.764431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.764444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.764650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.764662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.764948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.764960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.765302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.765314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.765624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.765637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.765879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.765891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.766054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.766066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.766258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.766270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 
00:40:05.989 [2024-06-11 14:07:58.766536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.766548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.766710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.766722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.766942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.766954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.767165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.767177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.767450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.767462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.767639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.767652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.767895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.767907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.768068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.768080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.768388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.768400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 00:40:05.989 [2024-06-11 14:07:58.768693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.989 [2024-06-11 14:07:58.768705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.989 qpair failed and we were unable to recover it. 
00:40:05.989 [2024-06-11 14:07:58.768938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:05.989 [2024-06-11 14:07:58.768951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:05.989 qpair failed and we were unable to recover it.
00:40:05.996 [... the same three-line error sequence repeats for every subsequent reconnect attempt from 14:07:58.768938 through 14:07:58.822247, differing only in timestamps: each connect() to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:40:05.996 [2024-06-11 14:07:58.822534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.822547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.822866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.822879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.823091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.823103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.823384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.823396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.823640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.823653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.823855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.823868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.824151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.824163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.824372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.824384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.824597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.824609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.824845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.824857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 
00:40:05.996 [2024-06-11 14:07:58.825148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.825160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.825444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.825456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.825781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.825794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.826013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.826026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.826344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.826356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.826594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.826606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.826859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.826871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.827112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.827124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.827391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.827403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.827689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.827701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 
00:40:05.996 [2024-06-11 14:07:58.827968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.827980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.828232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.828245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.828557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.828569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.828788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.828800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.828963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.828975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.829225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.829238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.829535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.829547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.829845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.829857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.830090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.830103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 00:40:05.996 [2024-06-11 14:07:58.830302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.996 [2024-06-11 14:07:58.830314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.996 qpair failed and we were unable to recover it. 
00:40:05.997 [2024-06-11 14:07:58.830604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.830617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.830907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.830919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.831149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.831162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.831403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.831415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.831658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.831670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.831898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.831911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.832234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.832246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.832511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.832524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.832695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.832708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.832848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.832860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 
00:40:05.997 [2024-06-11 14:07:58.833148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.833160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.833449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.833462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.833627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.833639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.833874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.833885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.834088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.834099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.834391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.834403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.834634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.834647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.834828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.834840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.835109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.835121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.835411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.835424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 
00:40:05.997 [2024-06-11 14:07:58.835718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.835730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.835940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.835952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.836244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.836256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.836474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.836490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.836786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.836798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.837084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.837097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.837428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.837440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.837653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.837665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.837956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.837968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.838227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.838239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 
00:40:05.997 [2024-06-11 14:07:58.838437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.838449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.838755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.838767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.838937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.838949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.839151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.839163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.839445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.839457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.839756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.997 [2024-06-11 14:07:58.839769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.997 qpair failed and we were unable to recover it. 00:40:05.997 [2024-06-11 14:07:58.839930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.839943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.840170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.840182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.840469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.840486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.840780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.840793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 
00:40:05.998 [2024-06-11 14:07:58.841065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.841077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.841343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.841355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.841554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.841566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.841809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.841822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.842088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.842100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.842267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.842282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.842493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.842506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.842781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.842794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.843062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.843075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.843358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.843370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 
00:40:05.998 [2024-06-11 14:07:58.843516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.843528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.843696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.843708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.843868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.843880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.844174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.844186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.844463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.844481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.844733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.844746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.845013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.845024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.845315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.845328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.845616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.845629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 00:40:05.998 [2024-06-11 14:07:58.845927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:05.998 [2024-06-11 14:07:58.845939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:05.998 qpair failed and we were unable to recover it. 
00:40:06.277 [2024-06-11 14:07:58.846087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.846099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.846336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.846348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.846573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.846585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.846832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.846845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.847061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.847074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.847232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.847248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.847520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.847549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.847745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.847758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.847915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.847927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.848164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.848177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 
00:40:06.277 [2024-06-11 14:07:58.848397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.848412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.848642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.848656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.277 [2024-06-11 14:07:58.848873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.277 [2024-06-11 14:07:58.848886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.277 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.849051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.849065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.849230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.849243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.849464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.849482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.849755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.849768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.849988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.850000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.850233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.850245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.850491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.850504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 
00:40:06.278 [2024-06-11 14:07:58.850731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.850743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.850967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.850979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.851200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.851212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.851509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.851522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.851673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.851685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.851946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.851960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.852250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.852262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.852469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.852486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.852769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.852781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.853047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.853059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 
00:40:06.278 [2024-06-11 14:07:58.853265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.853277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.853579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.853591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.853813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.853825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.854046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.854058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.854214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.854227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.854496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.854508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.854806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.854818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.855047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.855060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.855289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.855301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 00:40:06.278 [2024-06-11 14:07:58.855554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.855567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it. 
00:40:06.278 [2024-06-11 14:07:58.855790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.278 [2024-06-11 14:07:58.855802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.278 qpair failed and we were unable to recover it.
[The same three-message sequence repeats for every reconnect attempt from 14:07:58.855790 through 14:07:58.907671: each connect() to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) on tqpair=0x7ff240000b90, and the qpair cannot be recovered.]
00:40:06.284 [2024-06-11 14:07:58.907658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.907671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it.
00:40:06.284 [2024-06-11 14:07:58.907827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.907839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.908064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.908076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.908363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.908375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.908654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.908667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.908956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.908969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.909124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.909136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.909459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.909472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.909762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.909774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.910064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.910076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.910355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.910367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 
00:40:06.284 [2024-06-11 14:07:58.910535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.910547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.910744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.910756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.910953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.910965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.911170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.911182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.911468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.911483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.911702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.911716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.911986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.911998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.912232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.912245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.284 [2024-06-11 14:07:58.912545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.284 [2024-06-11 14:07:58.912557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.284 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.912774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.912787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 
00:40:06.285 [2024-06-11 14:07:58.913005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.913017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.913321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.913334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.913595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.913608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.913886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.913897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.914192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.914204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.914505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.914518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.914693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.914706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.914901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.914913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.915133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.915145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.915438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.915450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 
00:40:06.285 [2024-06-11 14:07:58.915721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.915733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.915880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.915892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.916102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.916114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.916332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.916345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.916629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.916642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.916840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.916852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.917118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.917130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.917407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.917420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.917688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.917700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.917879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.917891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 
00:40:06.285 [2024-06-11 14:07:58.918122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.918134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.918419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.918431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.918640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.918653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.918812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.918824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.919041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.919053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.919287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.919299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.919505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.919517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.919677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.919689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.919848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.919860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.920065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.920077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 
00:40:06.285 [2024-06-11 14:07:58.920342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.920354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.920583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.920596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.920883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.920895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.921141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.921153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.921377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.921389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.921676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.921690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.285 [2024-06-11 14:07:58.921940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.285 [2024-06-11 14:07:58.921952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.285 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.922258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.922271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.922563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.922575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.922791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.922803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 
00:40:06.286 [2024-06-11 14:07:58.923093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.923105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.923380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.923392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.923686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.923699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.923915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.923927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.924171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.924183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.924439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.924451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.924617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.924630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.924865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.924877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.925175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.925187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.925414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.925426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 
00:40:06.286 [2024-06-11 14:07:58.925722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.925734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.925888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.925900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.926120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.926132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.926422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.926434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.926659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.926671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.926893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.926906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.927119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.927131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.927423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.927436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.927731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.927743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.927967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.927979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 
00:40:06.286 [2024-06-11 14:07:58.928211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.928223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.928444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.928456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.928671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.928683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.928850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.928862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.929028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.929040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.929264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.929278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.929566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.929579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.286 qpair failed and we were unable to recover it. 00:40:06.286 [2024-06-11 14:07:58.929863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.286 [2024-06-11 14:07:58.929876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.930122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.930134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.930401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.930413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 
00:40:06.287 [2024-06-11 14:07:58.930705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.930717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.930982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.930994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.931244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.931256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.931544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.931557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.931763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.931776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.932063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.932077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.932305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.932317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.932586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.932599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.932837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.932849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.933068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.933080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 
00:40:06.287 [2024-06-11 14:07:58.933250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.933262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.933517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.933530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.933749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.933762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.934063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.934075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.934205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.934217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.934440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.934453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.934693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.934705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.934866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.934878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.935148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.935161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.935460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.935472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 
00:40:06.287 [2024-06-11 14:07:58.935792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.935804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.935946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.935958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.936243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.936255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.936473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.936489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.936784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.936796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.936946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.936958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.937294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.937306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.937621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.937633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.937834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.937846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.938106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.938118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 
00:40:06.287 [2024-06-11 14:07:58.938330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.938342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.938658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.938671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.938920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.938934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.939212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.287 [2024-06-11 14:07:58.939224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.287 qpair failed and we were unable to recover it. 00:40:06.287 [2024-06-11 14:07:58.939487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.288 [2024-06-11 14:07:58.939499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.288 qpair failed and we were unable to recover it. 00:40:06.288 [2024-06-11 14:07:58.939700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.288 [2024-06-11 14:07:58.939713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.288 qpair failed and we were unable to recover it. 00:40:06.288 [2024-06-11 14:07:58.939932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.288 [2024-06-11 14:07:58.939944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.288 qpair failed and we were unable to recover it. 00:40:06.288 [2024-06-11 14:07:58.940102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.288 [2024-06-11 14:07:58.940114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.288 qpair failed and we were unable to recover it. 00:40:06.288 [2024-06-11 14:07:58.940329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.288 [2024-06-11 14:07:58.940341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.288 qpair failed and we were unable to recover it. 00:40:06.288 [2024-06-11 14:07:58.940677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.288 [2024-06-11 14:07:58.940690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.288 qpair failed and we were unable to recover it. 
00:40:06.288 [2024-06-11 14:07:58.940857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.288 [2024-06-11 14:07:58.940869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.288 qpair failed and we were unable to recover it.
[the identical three-line failure repeats for roughly 200 further reconnect attempts, timestamps 2024-06-11 14:07:58.941184 through 14:07:58.993757, all with errno = 111 against tqpair=0x7ff240000b90, addr=10.0.0.2, port=4420]
00:40:06.295 [2024-06-11 14:07:58.993928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.993940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.994149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.994160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.994435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.994448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.994762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.994774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.995047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.995058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.995336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.995349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.995570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.995583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.995869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.995881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.996148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.996160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.996409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.996421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 
00:40:06.295 [2024-06-11 14:07:58.996699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.996711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.996934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.996946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.997182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.997195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.997443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.997456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.997677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.997690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.997987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.998001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.998285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.998298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.998536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.998548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.998788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.998801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.999014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.999026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 
00:40:06.295 [2024-06-11 14:07:58.999359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.999371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.999658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.999670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:58.999867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:58.999882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.000101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.000113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.000399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.000411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.000692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.000705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.000940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.000952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.001263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.001275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.001519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.001531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.001820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.001832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 
00:40:06.295 [2024-06-11 14:07:59.002096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.295 [2024-06-11 14:07:59.002108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.295 qpair failed and we were unable to recover it. 00:40:06.295 [2024-06-11 14:07:59.002275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.002287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.002427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.002439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.002749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.002762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.002984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.002996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.003217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.003229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.003377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.003389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.003668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.003681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.003928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.003940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.004112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.004125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 
00:40:06.296 [2024-06-11 14:07:59.004436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.004448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.004713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.004726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.004941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.004953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.005170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.005183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.005470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.005485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.005640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.005652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.005866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.005879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.006147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.006160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.006381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.006394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.006687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.006700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 
00:40:06.296 [2024-06-11 14:07:59.006941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.006954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.007205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.007217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.007501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.007514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.007733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.007745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.008034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.008046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.008354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.008366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.008666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.008678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.008968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.008981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.009281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.009294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.009610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.009622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 
00:40:06.296 [2024-06-11 14:07:59.009773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.009786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.009952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.009964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.010232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.010247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.010461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.010474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.010717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.010729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.010965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.010976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.011247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.011260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.011482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.011494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.011713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.011724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.011926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.011939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 
00:40:06.296 [2024-06-11 14:07:59.012149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.012161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.012451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.012463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.012785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.296 [2024-06-11 14:07:59.012798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.296 qpair failed and we were unable to recover it. 00:40:06.296 [2024-06-11 14:07:59.013064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.013077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.013376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.013389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.013636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.013649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.013870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.013882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.014119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.014131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.014432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.014445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.014693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.014705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 
00:40:06.297 [2024-06-11 14:07:59.014996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.015008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.015218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.015230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.015520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.015533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.015816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.015828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.016043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.016054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.016337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.016349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.016667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.016680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.016945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.016958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.017180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.017192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.017484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.017497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 
00:40:06.297 [2024-06-11 14:07:59.017656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.017669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.017957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.017970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.018242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.018254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.018538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.018551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.018851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.018864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.019141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.019153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.019420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.019432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.019666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.019678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.019942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.019955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.020229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.020242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 
00:40:06.297 [2024-06-11 14:07:59.020460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.297 [2024-06-11 14:07:59.020472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.297 qpair failed and we were unable to recover it. 00:40:06.297 [2024-06-11 14:07:59.020786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.020798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.021024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.021038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.021256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.021268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.021466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.021484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.021702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.021714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.021950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.021962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.022250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.022263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.022574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.022586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.022801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.022813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 
00:40:06.298 [2024-06-11 14:07:59.023110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.023123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.023414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.023426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.023649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.023661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.023883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.023895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.024206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.024218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.024456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.024468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.024700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.024712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.024931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.024943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.025251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.025264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.025488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.025500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 
00:40:06.298 [2024-06-11 14:07:59.025672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.025684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.025966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.025979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.026292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.026304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.026519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.026532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.026800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.026812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.027107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.027119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.027266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.027278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.027568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.027581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.027870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.027882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 00:40:06.298 [2024-06-11 14:07:59.028169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.298 [2024-06-11 14:07:59.028182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.298 qpair failed and we were unable to recover it. 
00:40:06.298 [2024-06-11 14:07:59.028462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.298 [2024-06-11 14:07:59.028475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.298 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 14:07:59.028 and 14:07:59.081; only the timestamps differ ...]
00:40:06.304 [2024-06-11 14:07:59.081684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.081697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.081915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.081927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.082149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.082162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.082454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.082467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.082693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.082706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.083018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.083030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.083332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.083344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.083632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.083644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.083861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.083874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.084175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.084186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 
00:40:06.304 [2024-06-11 14:07:59.084398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.084411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.084584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.084596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.084755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.084768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.085080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.085093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.085361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.085374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.085756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.085768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.085990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.086002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.086269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.086281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.086502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.086515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.086734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.086746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 
00:40:06.304 [2024-06-11 14:07:59.086966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.086978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.087239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.087251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.087543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.087557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.087869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.087881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.088093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.088106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.088321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.088333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.088574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.088587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.088755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.088767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.088986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.088998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 00:40:06.304 [2024-06-11 14:07:59.089141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.089155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.304 qpair failed and we were unable to recover it. 
00:40:06.304 [2024-06-11 14:07:59.089453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.304 [2024-06-11 14:07:59.089467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.089694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.089707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.089910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.089922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.090160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.090173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.090472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.090489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.090636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.090649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.090859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.090871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.091019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.091031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.091281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.091294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.091584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.091597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 
00:40:06.305 [2024-06-11 14:07:59.091863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.091875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.092034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.092046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.092342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.092356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.092579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.092591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.092906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.092919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.093134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.093146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.093437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.093449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.093755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.093767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.093979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.093992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.094185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.094197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 
00:40:06.305 [2024-06-11 14:07:59.094522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.094535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.094817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.094830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.095056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.095068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.095283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.095295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.095593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.095606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.095821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.095833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.096012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.096024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.096260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.096272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.096493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.096506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.096725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.096737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 
00:40:06.305 [2024-06-11 14:07:59.096885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.096897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.097174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.097187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.097483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.097496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.097739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.097751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.097903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.097915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.098159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.098172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.098372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.098385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.098625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.098637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.098849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.098861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.099012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.099026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 
00:40:06.305 [2024-06-11 14:07:59.099173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.099186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.099337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.305 [2024-06-11 14:07:59.099350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.305 qpair failed and we were unable to recover it. 00:40:06.305 [2024-06-11 14:07:59.099518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.099531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.099771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.099784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.099917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.099929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.100143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.100156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.100320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.100332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.100540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.100552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.100822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.100835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.100984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.100996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 
00:40:06.306 [2024-06-11 14:07:59.101196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.101208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.101414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.101427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.101668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.101681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.101882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.101895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.102031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.102043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.102224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.102237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.102373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.102386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.102543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.102555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.102704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.102717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.102942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.102954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 
00:40:06.306 [2024-06-11 14:07:59.103100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.103113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.103233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.103245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.103515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.103528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.103743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.103755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.103969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.103982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.104200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.104213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.104355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.104367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.104518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.104531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.104841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.104854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.105009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.105021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 
00:40:06.306 [2024-06-11 14:07:59.105175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.105188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.306 [2024-06-11 14:07:59.105348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.306 [2024-06-11 14:07:59.105360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.306 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.105586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.105598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.105750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.105762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.105976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.105988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.106125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.106137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.106351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.106363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.106571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.106583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.106681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.106693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.106842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.106856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 
00:40:06.307 [2024-06-11 14:07:59.106994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.107008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.107225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.107237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.107527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.107539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.107787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.107799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.107980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.107993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.108142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.108153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.108352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.108364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.108564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.108576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.108748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.108760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.108995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.109007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 
00:40:06.307 [2024-06-11 14:07:59.109221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.109234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.109439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.109451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.109600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.109612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.109778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.109791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.109990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.110002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.110219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.110231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.110449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.110461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.110756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.110769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.110988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.111000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 00:40:06.307 [2024-06-11 14:07:59.111267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.307 [2024-06-11 14:07:59.111279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.307 qpair failed and we were unable to recover it. 
00:40:06.307 [2024-06-11 14:07:59.111433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.307 [2024-06-11 14:07:59.111446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.307 qpair failed and we were unable to recover it.
00:40:06.316 [2024-06-11 14:07:59.111617 - 14:07:59.155910] (the same three-message group - posix_sock_create connect() failure with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." - repeats continuously for every reconnect attempt throughout this interval)
00:40:06.316 [2024-06-11 14:07:59.156127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.156139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.156284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.156296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.156404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.156416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.156647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.156660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.156762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.156774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.157088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.157100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.157304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.157317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.157458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.157470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.157693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.157706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.157852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.157866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 
00:40:06.316 [2024-06-11 14:07:59.158014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.158026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.158294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.158306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.158439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.158451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.158653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.158666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.158797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.158809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.159020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.159032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.159250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.159262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.159528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.159540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.159765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.159778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.159936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.159948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 
00:40:06.316 [2024-06-11 14:07:59.160166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.160178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.160391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.160403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.160563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.160576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.160710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.160722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.160948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.160960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.161171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.161183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.316 qpair failed and we were unable to recover it. 00:40:06.316 [2024-06-11 14:07:59.161400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.316 [2024-06-11 14:07:59.161413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.161554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.161566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.161843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.161855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.161989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.162001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 
00:40:06.317 [2024-06-11 14:07:59.162165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.162178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.162331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.162343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.162557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.162570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.162702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.162714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.162992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.163005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.163143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.163155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.163424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.163436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.163644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.163656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.163794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.163806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.164048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.164060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 
00:40:06.317 [2024-06-11 14:07:59.164294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.164307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.164448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.164461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.164616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.164629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.164832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.164844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.164990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.165002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.165272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.165285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.165497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.165510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.165800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.165813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.166030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.166042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.317 [2024-06-11 14:07:59.166176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.166190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 
00:40:06.317 [2024-06-11 14:07:59.166359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.317 [2024-06-11 14:07:59.166372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.317 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.166589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.166601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.166798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.166810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.167030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.167042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.167198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.167210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.167422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.167433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.167690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.167702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.167903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.167915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.318 [2024-06-11 14:07:59.168056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.318 [2024-06-11 14:07:59.168069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.318 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.168340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.168352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 
00:40:06.596 [2024-06-11 14:07:59.168499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.168511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.168646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.168658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.168865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.168879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.169023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.169036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.169249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.169261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.169490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.169503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.169651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.169664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.169864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.169876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.170075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.170087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.170244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.170256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 
00:40:06.596 [2024-06-11 14:07:59.170490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.170502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.170769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.170781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.170998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.171011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.596 [2024-06-11 14:07:59.171167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.596 [2024-06-11 14:07:59.171179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.596 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.171395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.171407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.171553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.171565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.171780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.171793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.171991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.172003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.172227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.172239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.172475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.172491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 
00:40:06.597 [2024-06-11 14:07:59.172723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.172735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.172932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.172944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.173144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.173156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.173307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.173319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.173522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.173535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.173678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.173690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.173904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.173917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.174087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.174100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.174308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.174320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.174475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.174503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 
00:40:06.597 [2024-06-11 14:07:59.174733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.174746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.174894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.174906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.175171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.175183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.175500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.175512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.175682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.175695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.175983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.175995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.176224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.176237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.176448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.176460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.176618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.176630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.176848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.176860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 
00:40:06.597 [2024-06-11 14:07:59.177072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.177084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.177347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.177359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.177594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.177606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.177703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.177716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.177929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.177942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.178139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.178151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.178365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.178377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.178572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.178585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.178716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.178729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.178883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.178896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 
00:40:06.597 [2024-06-11 14:07:59.179034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.179047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.179269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.179281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.179495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.179508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.179680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.179692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.179890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.179902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.180110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.180123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.180344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.180356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.180577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.180589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.180726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.180739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.181054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.181066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 
00:40:06.597 [2024-06-11 14:07:59.181218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.181230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.181380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.181392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.181665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.181677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.181818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.181830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.181978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.181990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.182256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.182268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.182424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.182436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.182642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.182654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.182796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.182808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 00:40:06.597 [2024-06-11 14:07:59.183022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.597 [2024-06-11 14:07:59.183034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.597 qpair failed and we were unable to recover it. 
00:40:06.597 [2024-06-11 14:07:59.183219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.597 [2024-06-11 14:07:59.183232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.597 qpair failed and we were unable to recover it.
00:40:06.600 (the three-line failure above repeated verbatim for every subsequent retry, timestamps 14:07:59.183401 through 14:07:59.225854; each attempt to connect tqpair=0x7ff240000b90 to 10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered)
00:40:06.600 [2024-06-11 14:07:59.226002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.600 [2024-06-11 14:07:59.226014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.600 qpair failed and we were unable to recover it. 00:40:06.600 [2024-06-11 14:07:59.226219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.600 [2024-06-11 14:07:59.226231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.600 qpair failed and we were unable to recover it. 00:40:06.600 [2024-06-11 14:07:59.226367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.600 [2024-06-11 14:07:59.226380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.600 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.226539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.226552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.226712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.226725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.226883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.226896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.227047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.227060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.227194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.227206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.227408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.227420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.227563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.227576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 
00:40:06.601 [2024-06-11 14:07:59.227846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.227858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.228970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.228982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.229126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.229139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.229300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.229312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 
00:40:06.601 [2024-06-11 14:07:59.229532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.229545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.229753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.229765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.229973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.229985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.230139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.230151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.230385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.230397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.230589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.230602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.230801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.230813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.230967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.230979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.231196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.231209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.231366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.231379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 
00:40:06.601 [2024-06-11 14:07:59.231650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.231663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.231765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.231778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.231920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.231933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.232141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.232153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.232290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.232303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.232443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.232456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.232564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.232576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.232721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.232733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.232936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.232949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.233088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.233101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 
00:40:06.601 [2024-06-11 14:07:59.233256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.233269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.233404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.233416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.233566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.233578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.233735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.233747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.233891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.233903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.234169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.234181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.234332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.234345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.234480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.234492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.234701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.234713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.234862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.234874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 
00:40:06.601 [2024-06-11 14:07:59.235034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.235047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.235182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.235194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.235407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.235420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.235562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.235575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.235843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.235856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.236000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.236011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.236208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.236221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.236436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.236448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.236656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.236669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.236868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.236881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 
00:40:06.601 [2024-06-11 14:07:59.237028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.237040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.237174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.237186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.237391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.237403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.237637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.237650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.237807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.237820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.237961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.237972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.601 qpair failed and we were unable to recover it. 00:40:06.601 [2024-06-11 14:07:59.238108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.601 [2024-06-11 14:07:59.238120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.238373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.238385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.238586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.238599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.238703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.238715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.239014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.239028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.239293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.239306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.239572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.239585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.239729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.239742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.239955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.239968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.240168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.240180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.240338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.240349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.240583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.240596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.240802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.240814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.240973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.240984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.241127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.241139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.241274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.241287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.241425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.241437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.241574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.241586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.241733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.241746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.241945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.241957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.242156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.242168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.242303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.242315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.242471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.242487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.242745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.242757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.242959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.242971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.243179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.243191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.243338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.243350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.243548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.243561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.243772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.243784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.244003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.244014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.244165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.244180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.244330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.244343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.244547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.244560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.244721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.244733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.244867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.244881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.245012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.245024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.245236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.245248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.245402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.245414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.245621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.245633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.245855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.245867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.246139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.246152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.246376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.246389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.246547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.246559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.246698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.246710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.246931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.246943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.247154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.247166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.247364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.247376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.247619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.247631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.247898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.247911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.248111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.248123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.248322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.248335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.248541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.248554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.248757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.248771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.248902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.248914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.249057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.249069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.249278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.249290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.249504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.249517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.249672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.249685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.249883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.249895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.250135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.250147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.250416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.250429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.250580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.250592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.250726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.250738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 00:40:06.602 [2024-06-11 14:07:59.250947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.602 [2024-06-11 14:07:59.250959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.602 qpair failed and we were unable to recover it. 
00:40:06.602 [2024-06-11 14:07:59.251107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.602 [2024-06-11 14:07:59.251119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.602 qpair failed and we were unable to recover it.
00:40:06.602 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 14:07:59.251 through 14:07:59.294; repetitions elided ...]
00:40:06.606 [2024-06-11 14:07:59.294713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.606 [2024-06-11 14:07:59.294725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.606 qpair failed and we were unable to recover it.
00:40:06.606 [2024-06-11 14:07:59.294936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.294948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.295147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.295160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.295366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.295378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.295646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.295659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.295936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.295949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.296151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.296163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.296389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.296402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.296623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.296636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.296772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.296784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.297096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.297108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 
00:40:06.606 [2024-06-11 14:07:59.297310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.297323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.297595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.297608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.297807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.297819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.298108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.298120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.298360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.298373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.298516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.298528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.298680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.298692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.298890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.298902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.299103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.299115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.299346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.299358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 
00:40:06.606 [2024-06-11 14:07:59.299578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.299590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.299823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.299836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.300067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.300080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.300226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.300240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.300439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.300451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.300693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.300706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.301023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.301035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.301164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.301176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.301442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.301454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.301609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.301622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 
00:40:06.606 [2024-06-11 14:07:59.301898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.301911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.302126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.302138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.302426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.302438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.302580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.302592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.302860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.302873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.303070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.303082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.303348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.303360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.303495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.303507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.303743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.303756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.303957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.303970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 
00:40:06.606 [2024-06-11 14:07:59.304249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.304261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.304406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.304419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.304553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.304565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.304784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.304796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.305068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.305080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.305219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.305231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.305389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.305401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.305609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.305622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.305776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.305788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.306100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.306112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 
00:40:06.606 [2024-06-11 14:07:59.306335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.306347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.306618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.306631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.306870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.306882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.307152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.307164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.307318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.307330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.307622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.307635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.307843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.307855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.308146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.308158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.308360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.308372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.308506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.308518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 
00:40:06.606 [2024-06-11 14:07:59.308739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.308751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.606 [2024-06-11 14:07:59.308961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.606 [2024-06-11 14:07:59.308973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.606 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.309220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.309233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.309398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.309413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.309571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.309584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.309870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.309882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.310197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.310209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.310475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.310492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.310691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.310703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.310937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.310949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.311150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.311163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.311363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.311375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.311531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.311544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.311770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.311782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.312068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.312080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.312214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.312227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.312425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.312437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.312641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.312654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.312822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.312834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.312937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.312949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.313237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.313249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.313452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.313464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.313697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.313709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.314022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.314034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.314234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.314246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.314471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.314487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.314589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.314602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.314844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.314856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.314963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.314976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.315121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.315133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.315357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.315369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.315528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.315541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.315762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.315774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.316038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.316051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.316262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.316275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.316407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.316420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.316708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.316721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.316928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.316940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.317140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.317153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.317367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.317379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.317667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.317679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.317944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.317956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.318106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.318119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.318279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.318296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.318514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.318526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.318817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.318830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.319100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.319112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.319383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.319394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.319614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.319626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.319846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.319863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.320084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.320096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.320309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.320321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.320472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.320495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.320761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.320773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.320908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.320920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.321069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.321080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.321344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.321356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.321623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.321635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.321846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.321859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.322019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.322030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.322268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.322280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.322425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.322437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.322726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.322739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.322887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.322900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.323113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.323125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.323437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.323449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.323604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.323616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.323771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.323783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.324000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.324011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 00:40:06.607 [2024-06-11 14:07:59.324181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.324193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it. 
00:40:06.607 [2024-06-11 14:07:59.324453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.607 [2024-06-11 14:07:59.324465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.607 qpair failed and we were unable to recover it.
00:40:06.608 (the preceding three-message sequence -- connect() failed with errno = 111, sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it -- repeats verbatim from 14:07:59.324668 through 14:07:59.350894 with only the timestamps changing; duplicate entries elided)
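For context on the repeated failure above: on Linux, errno 111 is ECONNREFUSED, which connect() returns when nothing is accepting connections on the destination port, consistent with the NVMe-oF target at 10.0.0.2:4420 being down at this point in the disconnect test. The following is a minimal standalone C sketch, not SPDK's posix.c, that reproduces the shape of the logged error line; the address and port are copied from the log, everything else is illustrative:

    /* Minimal standalone sketch (not SPDK's posix.c): shows how the
       "connect() failed, errno = 111" line arises. 111 is ECONNREFUSED,
       returned when no listener accepts on the destination port. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the target down and the port closed, this prints:
               connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }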
00:40:06.609 [2024-06-11 14:07:59.351055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.609 [2024-06-11 14:07:59.351067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.609 qpair failed and we were unable to recover it.
00:40:06.609 (the same failure sequence keeps repeating, 14:07:59.351357 through 14:07:59.355688, interleaved with the shell trace below; duplicate entries elided)
00:40:06.609 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:40:06.609 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:40:06.609 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:40:06.609 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:40:06.609 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.609 [2024-06-11 14:07:59.355969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.609 [2024-06-11 14:07:59.355982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.609 qpair failed and we were unable to recover it.
00:40:06.610 (the same three-message failure sequence continues unchanged, 14:07:59.356275 through 14:07:59.378849, with only the timestamps changing; duplicate entries elided)
00:40:06.611 [2024-06-11 14:07:59.379154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.379166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.379454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.379466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.379767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.379779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.380052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.380067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.380223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.380235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.380503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.380517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.380745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.380757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.381042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.381054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.381373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.381385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.381670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.381683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 
00:40:06.611 [2024-06-11 14:07:59.381835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.381847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.382113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.382125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.382421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.382433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.382608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.382620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.382854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.382866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.383073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.383086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.383372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.383385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.383678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.383690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.383853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.383865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.384078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.384092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 
00:40:06.611 [2024-06-11 14:07:59.384321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.384333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.384579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.384592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.384688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.384700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.384940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.384952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.385252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.385266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.385491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.385503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.385750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.385762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.385913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.385926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.386239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.386251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.386471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.386487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 
00:40:06.611 [2024-06-11 14:07:59.386690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.386704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.386873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.386885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.387043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.387055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.387295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.387307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.387522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.387535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.387735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.387748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.387922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.387934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.388200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.388213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.388508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.388521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.388736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.388748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 
00:40:06.611 [2024-06-11 14:07:59.389059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.389072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.389347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.389359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.389642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.389655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.389900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.389912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.390142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.390154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.390330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.390343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.390619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.390631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.390888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.390901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.391059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.391071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.391299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.391311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 
00:40:06.611 [2024-06-11 14:07:59.391523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.391535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.391732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.391744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.391903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.391916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.392092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.392104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.392304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.392316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.392591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.392605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.392834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.392847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.393018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.393031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.393264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.611 [2024-06-11 14:07:59.393276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.611 qpair failed and we were unable to recover it. 00:40:06.611 [2024-06-11 14:07:59.393592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.393604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 
00:40:06.612 [2024-06-11 14:07:59.393803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.393816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.394087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.394100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.394408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.394420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.394664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.394676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.394839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.394851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.395013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.395025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.395242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.395254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.395559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.395571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.395842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.395854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.396121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.396133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 
00:40:06.612 [2024-06-11 14:07:59.396409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.396422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.396696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.396709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.396933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.396946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.397232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.397244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.397406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.397418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.397639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.397652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.397894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.397906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.398107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.398119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 00:40:06.612 [2024-06-11 14:07:59.398331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:06.612 [2024-06-11 14:07:59.398343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420 00:40:06.612 qpair failed and we were unable to recover it. 
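For reference, errno 111 on Linux is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is actively refused because no listener is up at this point of the disconnect test, so the NVMe/TCP qpair cannot be re-established and every retry is logged as unrecoverable. A minimal way to confirm the errno mapping from a shell (a sketch, not part of this test run):

    # ECONNREFUSED is errno 111 on Linux; strerror() renders it as shown
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
    # expected output: 111 Connection refused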
00:40:06.612 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:40:06.612 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:40:06.612 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:06.612 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.612 [2024-06-11 14:07:59.398552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.612 [2024-06-11 14:07:59.398566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.612 qpair failed and we were unable to recover it.
...
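The xtrace lines above show target_disconnect.sh:19 creating the backing device for the test: a 64 MB RAM-backed malloc bdev with a 512-byte block size, named Malloc0. In this harness, rpc_cmd forwards to SPDK's JSON-RPC client; run standalone it would look like this (a sketch assuming the stock scripts/rpc.py client and its default /var/tmp/spdk.sock socket):

    # create a 64 MB malloc bdev with 512-byte blocks; prints the bdev name
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

On success the RPC prints the new bdev's name, which is the lone "Malloc0" line that surfaces amid the connect errors further down.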
00:40:06.612 [2024-06-11 14:07:59.400583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.612 [2024-06-11 14:07:59.400596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.612 qpair failed and we were unable to recover it.
...
00:40:06.613 [2024-06-11 14:07:59.416057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.613 [2024-06-11 14:07:59.416069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:40:06.613 qpair failed and we were unable to recover it.
00:40:06.613 [... connect()/qpair-failure retries continue, 14:07:59.416-14:07:59.417 ...]
00:40:06.613 Malloc0
00:40:06.613 [... connect()/qpair-failure retries continue, 14:07:59.417-14:07:59.418 ...]
00:40:06.613 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:06.613 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:40:06.613 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:06.613 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.613 [... connect()/qpair-failure retries continue throughout, 14:07:59.418-14:07:59.420 ...]
00:40:06.613 [... connect()/qpair-failure retries continue, 14:07:59.420-14:07:59.425 ...]
00:40:06.613 [2024-06-11 14:07:59.425454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:06.613 [... connect()/qpair-failure retries continue, 14:07:59.425-14:07:59.427 ...]
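The *** TCP Transport Init *** notice is the target-side acknowledgement of the rpc_cmd nvmf_create_transport call traced above; in the autotest harness rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py. A minimal standalone sketch of the same step (the -o flag is copied verbatim from the trace; the RPC socket path is the SPDK default and an assumption here):

# Against a running nvmf_tgt, create the TCP transport
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o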
00:40:06.614 [... connect()/qpair-failure retries continue, 14:07:59.427-14:07:59.432 ...]
00:40:06.614 [... connect()/qpair-failure retries continue, 14:07:59.432-14:07:59.434 ...]
00:40:06.614 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:06.614 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:40:06.614 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:06.614 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.614 [... connect()/qpair-failure retries continue, 14:07:59.435-14:07:59.437 ...]
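The traced nvmf_create_subsystem call creates the NQN the initiator has been dialing all along. An equivalent standalone sketch (-a allows any host NQN to connect, -s sets the subsystem serial number, both exactly as in the trace):

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001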
00:40:06.614 [... connect()/qpair-failure retries continue, 14:07:59.437-14:07:59.445 ...]
00:40:06.615 [... connect()/qpair-failure retries continue, 14:07:59.445-14:07:59.446 ...]
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.615 [2024-06-11 14:07:59.446856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:06.615 [2024-06-11 14:07:59.446897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:40:06.615 qpair failed and we were unable to recover it.
00:40:06.615 [... retries continue, first against tqpair=0x7ff248000b90 and then 0x7ff240000b90 again, 14:07:59.447-14:07:59.450 ...]
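Here the Malloc0 bdev (its name was echoed in the RPC output earlier) is attached to the subsystem as a namespace; note the retries now also probe a second qpair context, tqpair=0x7ff248000b90. A standalone sketch of the namespace step; the malloc bdev size and block size below are illustrative assumptions, not values taken from this run:

# Back the namespace with a RAM-disk bdev, then expose it through the subsystem
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0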
00:40:06.615 [... connect()/qpair-failure retries continue, 14:07:59.450-14:07:59.453 ...]
00:40:06.615 [... connect()/qpair-failure retries continue, 14:07:59.453-14:07:59.454 ...]
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.615 [... connect()/qpair-failure retries continue, 14:07:59.454-14:07:59.455 ...]
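Only this nvmf_subsystem_add_listener call actually opens 10.0.0.2:4420 on the target, which is why every connect() up to this point has been refused with errno 111. A standalone sketch, mirroring the trace:

./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420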
00:40:06.615 [... connect()/qpair-failure retries continue, 14:07:59.455-14:07:59.457 ...]
00:40:06.615 [2024-06-11 14:07:59.457749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:06.615 [2024-06-11 14:07:59.466163] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.615 [2024-06-11 14:07:59.466266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.615 [2024-06-11 14:07:59.466287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.615 [2024-06-11 14:07:59.466298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.615 [2024-06-11 14:07:59.466309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.615 [2024-06-11 14:07:59.466335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.615 qpair failed and we were unable to recover it.
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:06.615 14:07:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1676660
00:40:06.615 [... the same Unknown controller ID 0x1 / Connect command failed (rc -5, sct 1, sc 130) / CQ transport error -6 block repeats at 14:07:59.476 ...]
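With the listener up, the failure mode changes: the TCP connection now succeeds, but the Fabrics CONNECT for I/O qpair 2 carries controller ID 0x1, which the target no longer recognizes (consistent with the disconnect scenario this test case exercises), so the command completes with sct 1, sc 130 and the host sees ENXIO. Decoding those values; reading sc 130 as the Fabrics "Connect Invalid Parameters" status (0x82 in the NVMe-oF spec) is an interpretation, not something the log states:

# The status code is logged in decimal; fabrics status codes are usually read in hex
printf 'sc 130 = 0x%x\n' 130   # -> sc 130 = 0x82
# ENXIO (6) is what the driver surfaces as "CQ transport error -6"
grep -n 'define ENXIO' /usr/include/asm-generic/errno-base.h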
00:40:06.615 [2024-06-11 14:07:59.486152] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.615 [2024-06-11 14:07:59.486238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.615 [2024-06-11 14:07:59.486257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.615 [2024-06-11 14:07:59.486267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.615 [2024-06-11 14:07:59.486276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.615 [2024-06-11 14:07:59.486296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.615 qpair failed and we were unable to recover it.
00:40:06.876 [2024-06-11 14:07:59.495989] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.876 [2024-06-11 14:07:59.496077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.876 [2024-06-11 14:07:59.496095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.876 [2024-06-11 14:07:59.496105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.876 [2024-06-11 14:07:59.496114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.876 [2024-06-11 14:07:59.496133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.876 qpair failed and we were unable to recover it.
00:40:06.876 [2024-06-11 14:07:59.506088] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.506243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.506261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.506271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.506280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.506299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.516102] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.516180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.516198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.516208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.516217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.516235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.526179] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.526259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.526277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.526287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.526296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.526315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.536117] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.536222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.536242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.536252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.536261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.536280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.546159] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.546248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.546266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.546275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.546284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.546303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.556185] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.556263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.556281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.556290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.556299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.556317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.566216] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.566303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.566321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.566331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.566340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.566358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.576233] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.576323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.576341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.576351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.576360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.576380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.586255] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.586338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.586356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.586366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.586375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.586393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.596338] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.596462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.596485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.596495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.596504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.596522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.606301] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.606380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.606397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.606407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.606416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.606434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.616349] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.616436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.616455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.616465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.616474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.616496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.626418] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.877 [2024-06-11 14:07:59.626509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.877 [2024-06-11 14:07:59.626532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.877 [2024-06-11 14:07:59.626542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.877 [2024-06-11 14:07:59.626551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.877 [2024-06-11 14:07:59.626569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.877 qpair failed and we were unable to recover it.
00:40:06.877 [2024-06-11 14:07:59.636556] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.636638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.636657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.636667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.636676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.636694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.646635] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.646728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.646745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.646755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.646764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.646782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.656559] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.656643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.656661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.656670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.656679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.656697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.666711] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.666797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.666815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.666825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.666836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.666855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.676727] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.676875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.676893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.676903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.676912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.676930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.686601] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.686684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.686702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.686712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.686721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.686739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.696594] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.696673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.696690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.696700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.696708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.696726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.706743] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.706830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.706847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.706857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.706866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.706884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.716713] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.716821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.716839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.716848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.716857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.716875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.726710] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.726792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.726810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.726820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.726829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.726846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.736713] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.736799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.736817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.736827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.736836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.736855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.746793] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.746895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.746913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.746923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.746932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.746950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.756817] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.756899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.756917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.756927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.756941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.878 [2024-06-11 14:07:59.756959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.878 qpair failed and we were unable to recover it.
00:40:06.878 [2024-06-11 14:07:59.766873] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.878 [2024-06-11 14:07:59.766953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.878 [2024-06-11 14:07:59.766971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.878 [2024-06-11 14:07:59.766980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.878 [2024-06-11 14:07:59.766989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.879 [2024-06-11 14:07:59.767007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.879 qpair failed and we were unable to recover it.
00:40:06.879 [2024-06-11 14:07:59.776811] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:06.879 [2024-06-11 14:07:59.776892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:06.879 [2024-06-11 14:07:59.776909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:06.879 [2024-06-11 14:07:59.776919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:06.879 [2024-06-11 14:07:59.776928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:06.879 [2024-06-11 14:07:59.776945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:06.879 qpair failed and we were unable to recover it.
00:40:07.139 [2024-06-11 14:07:59.786913] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.139 [2024-06-11 14:07:59.786994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.139 [2024-06-11 14:07:59.787012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.139 [2024-06-11 14:07:59.787022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.139 [2024-06-11 14:07:59.787030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.139 [2024-06-11 14:07:59.787049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.139 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.796890] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.796968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.796986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.796995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.797004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.797022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.806930] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.807011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.807029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.807038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.807047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.807064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.816938] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.817018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.817036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.817045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.817054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.817072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.826971] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.827053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.827070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.827080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.827089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.827107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.836997] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.837123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.837140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.837149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.837158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.837176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.847026] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.847103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.847120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.847133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.847142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.847159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.857066] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.857163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.857179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.857188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.857197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.857215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.867077] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.867161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.867179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.867188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.867197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.867215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.877101] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.877183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.877201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.877211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.877220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.877238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.887166] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.887276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.887294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.887304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.887313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.887331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.897196] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.897280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.897297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.897307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.897315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.897334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.140 [2024-06-11 14:07:59.907133] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.140 [2024-06-11 14:07:59.907215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.140 [2024-06-11 14:07:59.907232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.140 [2024-06-11 14:07:59.907242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.140 [2024-06-11 14:07:59.907251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.140 [2024-06-11 14:07:59.907268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.140 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.917237] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.917319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.917336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.917345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.917354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.917372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.927300] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.927373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.927391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.927400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.927409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.927432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.937282] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.937364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.937385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.937394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.937403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.937421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.947329] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.947415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.947432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.947442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.947451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.947468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.957358] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.957443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.957461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.957471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.957484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.957501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.967409] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.967519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.967537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.967547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.967555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.967574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.977409] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.977496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.977513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.977523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.977532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.977553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.987428] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.987574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.987591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.987601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.987610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.987629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:07:59.997473] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:07:59.997593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:07:59.997610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:07:59.997620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:07:59.997629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:07:59.997647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:08:00.007603] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:08:00.007737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:08:00.007755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:08:00.007764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:08:00.007773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:08:00.007791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:08:00.017632] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:08:00.017722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:08:00.017742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:08:00.017752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:08:00.017760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:08:00.017780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:08:00.027566] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:08:00.027652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:08:00.027674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:08:00.027684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:08:00.027694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:08:00.027713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.141 [2024-06-11 14:08:00.037605] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.141 [2024-06-11 14:08:00.037688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.141 [2024-06-11 14:08:00.037706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.141 [2024-06-11 14:08:00.037716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.141 [2024-06-11 14:08:00.037724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.141 [2024-06-11 14:08:00.037743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.141 qpair failed and we were unable to recover it.
00:40:07.402 [2024-06-11 14:08:00.047644] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.402 [2024-06-11 14:08:00.047726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.402 [2024-06-11 14:08:00.047745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.402 [2024-06-11 14:08:00.047754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.402 [2024-06-11 14:08:00.047763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90
00:40:07.402 [2024-06-11 14:08:00.047781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:40:07.402 qpair failed and we were unable to recover it.
00:40:07.402 [2024-06-11 14:08:00.057674] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.402 [2024-06-11 14:08:00.057790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.402 [2024-06-11 14:08:00.057807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.402 [2024-06-11 14:08:00.057817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.402 [2024-06-11 14:08:00.057826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.402 [2024-06-11 14:08:00.057844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.402 qpair failed and we were unable to recover it. 00:40:07.402 [2024-06-11 14:08:00.067642] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.402 [2024-06-11 14:08:00.067725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.402 [2024-06-11 14:08:00.067744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.402 [2024-06-11 14:08:00.067755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.402 [2024-06-11 14:08:00.067764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.402 [2024-06-11 14:08:00.067787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.402 qpair failed and we were unable to recover it. 00:40:07.402 [2024-06-11 14:08:00.077714] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.402 [2024-06-11 14:08:00.077823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.402 [2024-06-11 14:08:00.077842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.077851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.077860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.403 [2024-06-11 14:08:00.077879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.403 qpair failed and we were unable to recover it. 
00:40:07.403 [2024-06-11 14:08:00.087795] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.087899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.087918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.087928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.087937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.403 [2024-06-11 14:08:00.087955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.097692] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.097783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.097801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.097810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.097819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.403 [2024-06-11 14:08:00.097838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.107743] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.107830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.107848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.107858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.107867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.403 [2024-06-11 14:08:00.107886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.403 qpair failed and we were unable to recover it. 
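Each seven-record block above is one failed attempt to attach an I/O queue pair. The chain runs target to host: the target's _nvmf_ctrlr_add_io_qpair (ctrlr.c) rejects the Fabrics CONNECT because controller ID 0x1 is unknown to it; the host's _nvme_fabric_qpair_connect_poll then sees the CONNECT completion fail with sct 1, sc 130, where status code type 1 is "command specific" and 0x82 is the Fabrics "Connect Invalid Parameters" status, consistent with an unknown cntlid; nvme_tcp tears the qpair down and spdk_nvme_qpair_process_completions surfaces it as -6 (ENXIO). A minimal triage sketch, assuming the console output is saved as build.log (a hypothetical file name, not produced by this job), that tallies and decodes every sct/sc pair in such a log:

#!/usr/bin/env bash
# Sketch only: tally "sct X, sc Y" pairs from an SPDK console log and
# decode the Fabrics CONNECT statuses; build.log is an assumed file name.
set -euo pipefail
log=${1:-build.log}

grep -o 'sct [0-9]*, sc [0-9]*' "$log" | sort | uniq -c |
while read -r count _ sct _ sc; do
    sct=${sct%,}                    # strip the trailing comma left by grep
    desc="unknown status"
    if [ "$sct" = 1 ]; then         # 1 = command-specific status code type
        case "$sc" in
            128) desc="CONNECT: incompatible format" ;;
            129) desc="CONNECT: controller busy" ;;
            130) desc="CONNECT: invalid parameters (e.g. unknown cntlid)" ;;
        esac
    fi
    printf '%6d x sct=%s sc=%s (%s)\n' "$count" "$sct" "$sc" "$desc"
done

Run against this section, the tally would be a single line dominated by sct=1 sc=130. The log then continues with the same rejection against a second queue pair: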
00:40:07.403 [2024-06-11 14:08:00.117810] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.117902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.117921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.117930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.117939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:40:07.403 [2024-06-11 14:08:00.117957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.127946] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.128142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.128208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.128245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.128277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.128338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.137917] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.138053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.138091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.138114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.138135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.138172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 
00:40:07.403 [2024-06-11 14:08:00.147923] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.148085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.148111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.148127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.148141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.148167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.157900] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.158000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.158023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.158036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.158052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.158075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.167969] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.168123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.168146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.168161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.168173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.168196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 
00:40:07.403 [2024-06-11 14:08:00.178004] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.178105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.178128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.178142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.178154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.178176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.188022] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.188121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.188143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.188157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.188169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.188191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 00:40:07.403 [2024-06-11 14:08:00.198119] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.198213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.198236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.198249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.198261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.403 [2024-06-11 14:08:00.198284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.403 qpair failed and we were unable to recover it. 
00:40:07.403 [2024-06-11 14:08:00.208188] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.403 [2024-06-11 14:08:00.208336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.403 [2024-06-11 14:08:00.208360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.403 [2024-06-11 14:08:00.208375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.403 [2024-06-11 14:08:00.208389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.208413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.218096] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.218195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.218218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.218232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.218245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.218268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.228057] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.228203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.228226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.228240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.228252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.228276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 
00:40:07.404 [2024-06-11 14:08:00.238126] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.238225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.238247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.238262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.238275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.238297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.248156] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.248252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.248276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.248289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.248306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.248329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.258157] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.258257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.258280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.258294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.258307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.258329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 
00:40:07.404 [2024-06-11 14:08:00.268217] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.268317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.268340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.268354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.268367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.268389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.278228] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.278326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.278349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.278363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.278377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.278400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.288298] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.288396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.288419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.288433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.288446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.288469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 
00:40:07.404 [2024-06-11 14:08:00.298362] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.298461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.298488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.298503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.298516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.298541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.404 [2024-06-11 14:08:00.308300] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.404 [2024-06-11 14:08:00.308452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.404 [2024-06-11 14:08:00.308481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.404 [2024-06-11 14:08:00.308496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.404 [2024-06-11 14:08:00.308509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.404 [2024-06-11 14:08:00.308532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.404 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.318406] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.318507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.318530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.318544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.318557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.318579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 
00:40:07.665 [2024-06-11 14:08:00.328444] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.328602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.328625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.328638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.328651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.328673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.338499] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.338597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.338620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.338638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.338651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.338673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.348502] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.348666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.348689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.348704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.348717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.348740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 
00:40:07.665 [2024-06-11 14:08:00.358521] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.358620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.358642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.358656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.358668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.358691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.368582] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.368688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.368711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.368726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.368738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.368761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.378588] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.378690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.378713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.378727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.378740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.378762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 
00:40:07.665 [2024-06-11 14:08:00.388669] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.388771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.388796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.388809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.388821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.388844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.398546] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.398641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.398664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.665 [2024-06-11 14:08:00.398678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.665 [2024-06-11 14:08:00.398691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.665 [2024-06-11 14:08:00.398714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.665 qpair failed and we were unable to recover it. 00:40:07.665 [2024-06-11 14:08:00.408642] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.665 [2024-06-11 14:08:00.408760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.665 [2024-06-11 14:08:00.408783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.408798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.408811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.408833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 
00:40:07.666 [2024-06-11 14:08:00.418687] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.418866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.418889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.418903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.418916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.418939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.428633] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.428731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.428754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.428772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.428785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.428806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.438657] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.438751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.438774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.438788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.438800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.438822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 
00:40:07.666 [2024-06-11 14:08:00.448688] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.448847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.448870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.448884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.448897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.448919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.458745] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.458842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.458864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.458877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.458889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.458911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.468796] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.468909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.468932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.468947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.468960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.468982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 
00:40:07.666 [2024-06-11 14:08:00.478762] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.478857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.478880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.478895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.478908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.478930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.488877] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.488978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.489001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.489015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.489028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.489049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.498927] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.499080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.499102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.499116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.499129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.499152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 
00:40:07.666 [2024-06-11 14:08:00.508858] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.508956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.508979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.508994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.509007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.509029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.518967] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.519061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.519083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.519104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.519117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.519141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 00:40:07.666 [2024-06-11 14:08:00.528922] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.529028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.529052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.529066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.529079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.529101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.666 qpair failed and we were unable to recover it. 
00:40:07.666 [2024-06-11 14:08:00.539017] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.666 [2024-06-11 14:08:00.539114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.666 [2024-06-11 14:08:00.539137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.666 [2024-06-11 14:08:00.539150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.666 [2024-06-11 14:08:00.539163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.666 [2024-06-11 14:08:00.539185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.667 qpair failed and we were unable to recover it. 00:40:07.667 [2024-06-11 14:08:00.549034] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.667 [2024-06-11 14:08:00.549131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.667 [2024-06-11 14:08:00.549154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.667 [2024-06-11 14:08:00.549169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.667 [2024-06-11 14:08:00.549181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.667 [2024-06-11 14:08:00.549204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.667 qpair failed and we were unable to recover it. 00:40:07.667 [2024-06-11 14:08:00.559093] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.667 [2024-06-11 14:08:00.559211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.667 [2024-06-11 14:08:00.559234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.667 [2024-06-11 14:08:00.559247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.667 [2024-06-11 14:08:00.559260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.667 [2024-06-11 14:08:00.559282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.667 qpair failed and we were unable to recover it. 
00:40:07.667 [2024-06-11 14:08:00.569047] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.667 [2024-06-11 14:08:00.569239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.667 [2024-06-11 14:08:00.569261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.667 [2024-06-11 14:08:00.569275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.667 [2024-06-11 14:08:00.569289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.667 [2024-06-11 14:08:00.569312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.667 qpair failed and we were unable to recover it. 00:40:07.927 [2024-06-11 14:08:00.579152] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.927 [2024-06-11 14:08:00.579248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.927 [2024-06-11 14:08:00.579272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.927 [2024-06-11 14:08:00.579286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.927 [2024-06-11 14:08:00.579299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.927 [2024-06-11 14:08:00.579321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.927 qpair failed and we were unable to recover it. 00:40:07.927 [2024-06-11 14:08:00.589181] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.927 [2024-06-11 14:08:00.589279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.927 [2024-06-11 14:08:00.589302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.927 [2024-06-11 14:08:00.589316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.927 [2024-06-11 14:08:00.589329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.927 [2024-06-11 14:08:00.589351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.927 qpair failed and we were unable to recover it. 
00:40:07.927 [2024-06-11 14:08:00.599229] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.927 [2024-06-11 14:08:00.599397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.927 [2024-06-11 14:08:00.599420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.927 [2024-06-11 14:08:00.599435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.927 [2024-06-11 14:08:00.599448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.927 [2024-06-11 14:08:00.599471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.927 qpair failed and we were unable to recover it. 00:40:07.927 [2024-06-11 14:08:00.609261] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.927 [2024-06-11 14:08:00.609357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.927 [2024-06-11 14:08:00.609380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.927 [2024-06-11 14:08:00.609397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.927 [2024-06-11 14:08:00.609410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.927 [2024-06-11 14:08:00.609432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.927 qpair failed and we were unable to recover it. 00:40:07.927 [2024-06-11 14:08:00.619295] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:07.927 [2024-06-11 14:08:00.619390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:07.927 [2024-06-11 14:08:00.619413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:07.927 [2024-06-11 14:08:00.619427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:07.927 [2024-06-11 14:08:00.619440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:07.928 [2024-06-11 14:08:00.619462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:07.928 qpair failed and we were unable to recover it. 
00:40:07.928 [2024-06-11 14:08:00.629285] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.629374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.629397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.629410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.629423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.629446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.639244] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.639382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.639405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.639419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.639432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.639454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.649283] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.649378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.649400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.649415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.649428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.649450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.659427] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.659533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.659557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.659571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.659584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.659606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.669348] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.669453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.669482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.669497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.669510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.669533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.679434] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.679533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.679557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.679570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.679583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.679606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.689469] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.689567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.689590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.689604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.689617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.689639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.699435] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.699538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.699564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.699578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.699591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.699613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.709556] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.709654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.709678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.709693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.709706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.709728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.719498] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.719594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.719616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.719630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.719643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.719665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.729534] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.729655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.729679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.729693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.729706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.729728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.739551] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.739681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.739704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.739718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.739731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.739757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.928 qpair failed and we were unable to recover it.
00:40:07.928 [2024-06-11 14:08:00.749576] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.928 [2024-06-11 14:08:00.749673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.928 [2024-06-11 14:08:00.749695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.928 [2024-06-11 14:08:00.749710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.928 [2024-06-11 14:08:00.749722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.928 [2024-06-11 14:08:00.749744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.759685] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.759777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.759800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.759813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.759825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.759848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.769817] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.769941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.769964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.769977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.769990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.770012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.779677] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.779774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.779797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.779811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.779824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.779846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.789772] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.789929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.789955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.789970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.789982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.790005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.799728] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.799849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.799872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.799885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.799898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.799920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.809874] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.809975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.809996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.810009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.810022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.810044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.819791] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.819884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.819907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.819920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.819932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.819954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:07.929 [2024-06-11 14:08:00.829864] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:07.929 [2024-06-11 14:08:00.829964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:07.929 [2024-06-11 14:08:00.829986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:07.929 [2024-06-11 14:08:00.829999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:07.929 [2024-06-11 14:08:00.830011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:07.929 [2024-06-11 14:08:00.830037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:07.929 qpair failed and we were unable to recover it.
00:40:08.189 [2024-06-11 14:08:00.839912] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.189 [2024-06-11 14:08:00.840005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.189 [2024-06-11 14:08:00.840028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.189 [2024-06-11 14:08:00.840042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.189 [2024-06-11 14:08:00.840053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.189 [2024-06-11 14:08:00.840076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.189 qpair failed and we were unable to recover it.
00:40:08.189 [2024-06-11 14:08:00.849873] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.189 [2024-06-11 14:08:00.849969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.189 [2024-06-11 14:08:00.849992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.189 [2024-06-11 14:08:00.850005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.189 [2024-06-11 14:08:00.850017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.189 [2024-06-11 14:08:00.850038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.189 qpair failed and we were unable to recover it.
00:40:08.189 [2024-06-11 14:08:00.859980] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.189 [2024-06-11 14:08:00.860075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.189 [2024-06-11 14:08:00.860097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.189 [2024-06-11 14:08:00.860110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.189 [2024-06-11 14:08:00.860122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.189 [2024-06-11 14:08:00.860144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.189 qpair failed and we were unable to recover it.
00:40:08.189 [2024-06-11 14:08:00.869940] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.189 [2024-06-11 14:08:00.870040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.189 [2024-06-11 14:08:00.870062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.189 [2024-06-11 14:08:00.870076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.189 [2024-06-11 14:08:00.870087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.189 [2024-06-11 14:08:00.870108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.189 qpair failed and we were unable to recover it.
00:40:08.189 [2024-06-11 14:08:00.880036] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.189 [2024-06-11 14:08:00.880133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.189 [2024-06-11 14:08:00.880159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.189 [2024-06-11 14:08:00.880173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.189 [2024-06-11 14:08:00.880185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.189 [2024-06-11 14:08:00.880206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.189 qpair failed and we were unable to recover it.
00:40:08.189 [2024-06-11 14:08:00.890095] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.890215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.890238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.890251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.890263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.890286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.900133] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.900230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.900253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.900266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.900278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.900300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.910034] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.910125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.910148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.910161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.910173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.910194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.920159] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.920315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.920337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.920350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.920362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.920388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.930213] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.930313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.930335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.930348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.930360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.930382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.940208] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.940308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.940330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.940345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.940357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.940379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.950151] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.950245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.950266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.950280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.950292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.950313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.960278] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.960397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.960422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.960435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.960447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.960470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.970295] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.970432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.970459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.970472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.970494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.970516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.980335] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.980431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.980454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.980468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.980496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.980519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:00.990395] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:00.990496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:00.990519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:00.990533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:00.990545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:00.990567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:01.000373] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:01.000487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:01.000510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:01.000523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:01.000535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:01.000557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:01.010497] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:01.010591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:01.010614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.190 [2024-06-11 14:08:01.010627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.190 [2024-06-11 14:08:01.010642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.190 [2024-06-11 14:08:01.010665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.190 qpair failed and we were unable to recover it.
00:40:08.190 [2024-06-11 14:08:01.020471] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.190 [2024-06-11 14:08:01.020573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.190 [2024-06-11 14:08:01.020596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.020609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.020621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.020643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.030508] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.030610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.030632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.030645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.030657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.030679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.040497] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.040595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.040617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.040630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.040642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.040664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.050544] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.050658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.050680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.050694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.050706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.050728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.060589] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.060688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.060709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.060722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.060734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.060755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.070582] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.070686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.070708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.070722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.070733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.070755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.080692] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.080840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.080863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.080877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.080888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.080911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.191 [2024-06-11 14:08:01.090659] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.191 [2024-06-11 14:08:01.090808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.191 [2024-06-11 14:08:01.090831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.191 [2024-06-11 14:08:01.090844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.191 [2024-06-11 14:08:01.090856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.191 [2024-06-11 14:08:01.090878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.191 qpair failed and we were unable to recover it.
00:40:08.451 [2024-06-11 14:08:01.100704] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.451 [2024-06-11 14:08:01.100865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.451 [2024-06-11 14:08:01.100887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.451 [2024-06-11 14:08:01.100900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.451 [2024-06-11 14:08:01.100916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.451 [2024-06-11 14:08:01.100938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.451 qpair failed and we were unable to recover it.
00:40:08.451 [2024-06-11 14:08:01.110719] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.451 [2024-06-11 14:08:01.110810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.451 [2024-06-11 14:08:01.110833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.451 [2024-06-11 14:08:01.110846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.451 [2024-06-11 14:08:01.110857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.451 [2024-06-11 14:08:01.110879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.451 qpair failed and we were unable to recover it.
00:40:08.451 [2024-06-11 14:08:01.120718] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.451 [2024-06-11 14:08:01.120807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.451 [2024-06-11 14:08:01.120829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.451 [2024-06-11 14:08:01.120842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.451 [2024-06-11 14:08:01.120854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.451 [2024-06-11 14:08:01.120875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.451 qpair failed and we were unable to recover it.
00:40:08.451 [2024-06-11 14:08:01.130827] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.451 [2024-06-11 14:08:01.130974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.451 [2024-06-11 14:08:01.130996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.451 [2024-06-11 14:08:01.131009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.451 [2024-06-11 14:08:01.131021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.451 [2024-06-11 14:08:01.131043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.451 qpair failed and we were unable to recover it.
00:40:08.451 [2024-06-11 14:08:01.140762] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.451 [2024-06-11 14:08:01.140894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.451 [2024-06-11 14:08:01.140917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.451 [2024-06-11 14:08:01.140930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.451 [2024-06-11 14:08:01.140942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.451 [2024-06-11 14:08:01.140964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.451 qpair failed and we were unable to recover it.
00:40:08.451 [2024-06-11 14:08:01.150794] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.451 [2024-06-11 14:08:01.150891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.451 [2024-06-11 14:08:01.150914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.451 [2024-06-11 14:08:01.150927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.451 [2024-06-11 14:08:01.150938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.451 [2024-06-11 14:08:01.150960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.451 qpair failed and we were unable to recover it.
00:40:08.452 [2024-06-11 14:08:01.160850] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.452 [2024-06-11 14:08:01.160956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.452 [2024-06-11 14:08:01.160978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.452 [2024-06-11 14:08:01.160992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.452 [2024-06-11 14:08:01.161003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.452 [2024-06-11 14:08:01.161025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.452 qpair failed and we were unable to recover it.
00:40:08.452 [2024-06-11 14:08:01.170855] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.452 [2024-06-11 14:08:01.170948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.452 [2024-06-11 14:08:01.170970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.452 [2024-06-11 14:08:01.170983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.452 [2024-06-11 14:08:01.170995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.452 [2024-06-11 14:08:01.171018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.452 qpair failed and we were unable to recover it.
00:40:08.452 [2024-06-11 14:08:01.180891] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.452 [2024-06-11 14:08:01.180985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.452 [2024-06-11 14:08:01.181008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.452 [2024-06-11 14:08:01.181021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.452 [2024-06-11 14:08:01.181033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.452 [2024-06-11 14:08:01.181054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.452 qpair failed and we were unable to recover it.
00:40:08.452 [2024-06-11 14:08:01.190905] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:08.452 [2024-06-11 14:08:01.191062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:08.452 [2024-06-11 14:08:01.191085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:08.452 [2024-06-11 14:08:01.191098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:08.452 [2024-06-11 14:08:01.191113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:08.452 [2024-06-11 14:08:01.191136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:08.452 qpair failed and we were unable to recover it.
00:40:08.452 [2024-06-11 14:08:01.200865] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.201051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.201073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.452 [2024-06-11 14:08:01.201086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.452 [2024-06-11 14:08:01.201098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.452 [2024-06-11 14:08:01.201121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.452 qpair failed and we were unable to recover it. 00:40:08.452 [2024-06-11 14:08:01.210955] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.211049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.211071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.452 [2024-06-11 14:08:01.211084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.452 [2024-06-11 14:08:01.211097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.452 [2024-06-11 14:08:01.211119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.452 qpair failed and we were unable to recover it. 00:40:08.452 [2024-06-11 14:08:01.221002] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.221100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.221122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.452 [2024-06-11 14:08:01.221135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.452 [2024-06-11 14:08:01.221147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.452 [2024-06-11 14:08:01.221169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.452 qpair failed and we were unable to recover it. 
00:40:08.452 [2024-06-11 14:08:01.231038] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.231142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.231164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.452 [2024-06-11 14:08:01.231177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.452 [2024-06-11 14:08:01.231189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.452 [2024-06-11 14:08:01.231210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.452 qpair failed and we were unable to recover it. 00:40:08.452 [2024-06-11 14:08:01.241070] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.241192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.241214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.452 [2024-06-11 14:08:01.241227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.452 [2024-06-11 14:08:01.241239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.452 [2024-06-11 14:08:01.241261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.452 qpair failed and we were unable to recover it. 00:40:08.452 [2024-06-11 14:08:01.251161] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.251308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.251330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.452 [2024-06-11 14:08:01.251342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.452 [2024-06-11 14:08:01.251354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.452 [2024-06-11 14:08:01.251376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.452 qpair failed and we were unable to recover it. 
00:40:08.452 [2024-06-11 14:08:01.261115] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.452 [2024-06-11 14:08:01.261214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.452 [2024-06-11 14:08:01.261236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.261249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.261260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.261282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.453 [2024-06-11 14:08:01.271066] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.271213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.271235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.271249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.271261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.271283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.453 [2024-06-11 14:08:01.281123] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.281220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.281243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.281260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.281272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.281294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 
00:40:08.453 [2024-06-11 14:08:01.291142] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.291234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.291257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.291270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.291282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.291303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.453 [2024-06-11 14:08:01.301237] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.301377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.301400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.301413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.301424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.301446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.453 [2024-06-11 14:08:01.311240] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.311336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.311358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.311371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.311384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.311406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 
00:40:08.453 [2024-06-11 14:08:01.321286] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.321378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.321401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.321414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.321425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.321448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.453 [2024-06-11 14:08:01.331249] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.331404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.331427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.331440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.331452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.331473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.453 [2024-06-11 14:08:01.341291] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.341391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.341414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.341427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.341439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.341461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 
00:40:08.453 [2024-06-11 14:08:01.351389] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.453 [2024-06-11 14:08:01.351491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.453 [2024-06-11 14:08:01.351515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.453 [2024-06-11 14:08:01.351528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.453 [2024-06-11 14:08:01.351540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.453 [2024-06-11 14:08:01.351562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.453 qpair failed and we were unable to recover it. 00:40:08.714 [2024-06-11 14:08:01.361448] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.361555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.361577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.361591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.361603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.361625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 00:40:08.714 [2024-06-11 14:08:01.371430] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.371529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.371551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.371568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.371580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.371602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 
00:40:08.714 [2024-06-11 14:08:01.381448] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.381573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.381596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.381610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.381621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.381644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 00:40:08.714 [2024-06-11 14:08:01.391510] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.391610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.391633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.391647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.391658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.391680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 00:40:08.714 [2024-06-11 14:08:01.401462] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.401562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.401585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.401599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.401611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.401634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 
00:40:08.714 [2024-06-11 14:08:01.411577] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.411738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.411761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.411774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.411786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.411822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 00:40:08.714 [2024-06-11 14:08:01.421600] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.714 [2024-06-11 14:08:01.421699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.714 [2024-06-11 14:08:01.421721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.714 [2024-06-11 14:08:01.421735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.714 [2024-06-11 14:08:01.421746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.714 [2024-06-11 14:08:01.421769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.714 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.431617] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.431716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.431739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.431752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.431765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.431787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 
00:40:08.715 [2024-06-11 14:08:01.441687] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.441791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.441814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.441828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.441840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.441861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.451679] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.451769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.451792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.451805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.451817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.451839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.461734] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.461832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.461853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.461870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.461882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.461905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 
00:40:08.715 [2024-06-11 14:08:01.471826] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.471921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.471944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.471957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.471969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.471991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.481780] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.481920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.481944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.481957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.481969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.481991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.491822] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.491919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.491941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.491954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.491966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.491988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 
00:40:08.715 [2024-06-11 14:08:01.501860] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.501952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.501974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.501986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.501998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.502020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.511892] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.715 [2024-06-11 14:08:01.512084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.715 [2024-06-11 14:08:01.512106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.715 [2024-06-11 14:08:01.512120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.715 [2024-06-11 14:08:01.512132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.715 [2024-06-11 14:08:01.512154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.715 qpair failed and we were unable to recover it. 00:40:08.715 [2024-06-11 14:08:01.521918] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.522014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.522037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.522050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.522062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.522084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 
00:40:08.716 [2024-06-11 14:08:01.531951] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.532048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.532070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.532083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.532095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.532117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 00:40:08.716 [2024-06-11 14:08:01.541985] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.542177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.542199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.542213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.542225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.542247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 00:40:08.716 [2024-06-11 14:08:01.552039] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.552139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.552161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.552179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.552191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.552212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 
00:40:08.716 [2024-06-11 14:08:01.562043] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.562139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.562162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.562175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.562187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.562209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 00:40:08.716 [2024-06-11 14:08:01.572071] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.572162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.572185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.572198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.572210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.572232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 00:40:08.716 [2024-06-11 14:08:01.582123] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.582218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.582240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.582253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.582265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.582287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 
00:40:08.716 [2024-06-11 14:08:01.592169] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.592322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.592344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.592357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.592369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.592391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 00:40:08.716 [2024-06-11 14:08:01.602185] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.602294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.716 [2024-06-11 14:08:01.602317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.716 [2024-06-11 14:08:01.602330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.716 [2024-06-11 14:08:01.602342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.716 [2024-06-11 14:08:01.602363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.716 qpair failed and we were unable to recover it. 00:40:08.716 [2024-06-11 14:08:01.612240] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.716 [2024-06-11 14:08:01.612349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.717 [2024-06-11 14:08:01.612371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.717 [2024-06-11 14:08:01.612384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.717 [2024-06-11 14:08:01.612396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.717 [2024-06-11 14:08:01.612418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.717 qpair failed and we were unable to recover it. 
00:40:08.977 [2024-06-11 14:08:01.622231] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.977 [2024-06-11 14:08:01.622326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.977 [2024-06-11 14:08:01.622348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.977 [2024-06-11 14:08:01.622362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.977 [2024-06-11 14:08:01.622373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.977 [2024-06-11 14:08:01.622396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.977 qpair failed and we were unable to recover it. 00:40:08.977 [2024-06-11 14:08:01.632256] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.977 [2024-06-11 14:08:01.632353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.977 [2024-06-11 14:08:01.632375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.977 [2024-06-11 14:08:01.632389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.977 [2024-06-11 14:08:01.632400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.977 [2024-06-11 14:08:01.632422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.977 qpair failed and we were unable to recover it. 00:40:08.977 [2024-06-11 14:08:01.642330] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.977 [2024-06-11 14:08:01.642428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.977 [2024-06-11 14:08:01.642454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.977 [2024-06-11 14:08:01.642467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.977 [2024-06-11 14:08:01.642484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.977 [2024-06-11 14:08:01.642506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.977 qpair failed and we were unable to recover it. 
00:40:08.977 [2024-06-11 14:08:01.652489] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.652595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.652617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.652630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.652642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.652664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 00:40:08.978 [2024-06-11 14:08:01.662417] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.662521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.662544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.662556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.662568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.662590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 00:40:08.978 [2024-06-11 14:08:01.672434] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.672532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.672555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.672568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.672580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.672602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 
00:40:08.978 [2024-06-11 14:08:01.682425] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.682529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.682553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.682566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.682578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.682599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 00:40:08.978 [2024-06-11 14:08:01.692440] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.692543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.692565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.692578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.692590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.692613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 00:40:08.978 [2024-06-11 14:08:01.702492] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.702653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.702675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.702689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.702700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.702723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 
00:40:08.978 [2024-06-11 14:08:01.712425] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.712530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.712553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.712566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.712578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.712600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 00:40:08.978 [2024-06-11 14:08:01.722602] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.722699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.722722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.722735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.722747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.722769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 00:40:08.978 [2024-06-11 14:08:01.732558] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:08.978 [2024-06-11 14:08:01.732649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:08.978 [2024-06-11 14:08:01.732675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:08.978 [2024-06-11 14:08:01.732688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:08.978 [2024-06-11 14:08:01.732700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:08.978 [2024-06-11 14:08:01.732722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:08.978 qpair failed and we were unable to recover it. 
[... the same seven-message CONNECT failure sequence (ctrlr.c:759 "Unknown controller ID 0x1" through nvme_qpair.c:804 "CQ transport error -6" and "qpair failed and we were unable to recover it.") repeats for every retry timestamped from 14:08:01.742 through 14:08:02.364; only the timestamps differ ...]
00:40:09.503 [2024-06-11 14:08:02.374396] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.503 [2024-06-11 14:08:02.374516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.503 [2024-06-11 14:08:02.374539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.503 [2024-06-11 14:08:02.374552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.503 [2024-06-11 14:08:02.374564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.503 [2024-06-11 14:08:02.374588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.503 qpair failed and we were unable to recover it. 00:40:09.503 [2024-06-11 14:08:02.384433] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.503 [2024-06-11 14:08:02.384564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.504 [2024-06-11 14:08:02.384587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.504 [2024-06-11 14:08:02.384600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.504 [2024-06-11 14:08:02.384612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.504 [2024-06-11 14:08:02.384635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.504 qpair failed and we were unable to recover it. 00:40:09.504 [2024-06-11 14:08:02.394501] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.504 [2024-06-11 14:08:02.394595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.504 [2024-06-11 14:08:02.394617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.504 [2024-06-11 14:08:02.394630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.504 [2024-06-11 14:08:02.394642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.504 [2024-06-11 14:08:02.394663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.504 qpair failed and we were unable to recover it. 
00:40:09.504 [2024-06-11 14:08:02.404543] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.504 [2024-06-11 14:08:02.404698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.504 [2024-06-11 14:08:02.404721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.504 [2024-06-11 14:08:02.404738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.504 [2024-06-11 14:08:02.404750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.504 [2024-06-11 14:08:02.404772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.504 qpair failed and we were unable to recover it. 00:40:09.765 [2024-06-11 14:08:02.414562] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.765 [2024-06-11 14:08:02.414653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.765 [2024-06-11 14:08:02.414676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.765 [2024-06-11 14:08:02.414689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.765 [2024-06-11 14:08:02.414701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.765 [2024-06-11 14:08:02.414723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.765 qpair failed and we were unable to recover it. 00:40:09.765 [2024-06-11 14:08:02.424608] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.765 [2024-06-11 14:08:02.424708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.765 [2024-06-11 14:08:02.424730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.765 [2024-06-11 14:08:02.424743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.765 [2024-06-11 14:08:02.424755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.765 [2024-06-11 14:08:02.424777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.765 qpair failed and we were unable to recover it. 
00:40:09.765 [2024-06-11 14:08:02.434701] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.765 [2024-06-11 14:08:02.434803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.765 [2024-06-11 14:08:02.434826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.765 [2024-06-11 14:08:02.434839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.434850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.434872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.444644] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.444744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.444767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.444780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.444792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.444814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.454611] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.454705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.454727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.454740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.454752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.454774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 
00:40:09.766 [2024-06-11 14:08:02.464735] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.464831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.464852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.464865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.464877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.464898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.474760] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.474857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.474879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.474893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.474904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.474925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.484766] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.484896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.484919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.484932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.484943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.484965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 
00:40:09.766 [2024-06-11 14:08:02.494901] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.495001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.495023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.495040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.495052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.495074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.504826] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.504922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.504945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.504958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.504969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.504991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.514795] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.514926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.514948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.514961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.514973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.514995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 
00:40:09.766 [2024-06-11 14:08:02.524904] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.524999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.525022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.525036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.525048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.525070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.534908] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.535008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.535031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.535044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.535056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.535078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.766 [2024-06-11 14:08:02.544959] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.545056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.545078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.545091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.545103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.545125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 
00:40:09.766 [2024-06-11 14:08:02.554906] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.766 [2024-06-11 14:08:02.555000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.766 [2024-06-11 14:08:02.555023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.766 [2024-06-11 14:08:02.555036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.766 [2024-06-11 14:08:02.555048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.766 [2024-06-11 14:08:02.555070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.766 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.565001] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.565091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.565114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.565128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.565140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.565162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.574980] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.575073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.575096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.575109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.575121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.575142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 
00:40:09.767 [2024-06-11 14:08:02.585088] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.585193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.585219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.585232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.585245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.585266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.595100] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.595195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.595217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.595231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.595243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.595265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.605153] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.605245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.605268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.605281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.605293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.605314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 
00:40:09.767 [2024-06-11 14:08:02.615115] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.615244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.615267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.615280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.615292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.615313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.625264] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.625362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.625385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.625398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.625409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.625432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.635141] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.635292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.635315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.635328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.635340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.635363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 
00:40:09.767 [2024-06-11 14:08:02.645299] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.645399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.645422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.645435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.645447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.645469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.655258] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.655370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.655393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.655406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.655418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.655440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 00:40:09.767 [2024-06-11 14:08:02.665319] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:09.767 [2024-06-11 14:08:02.665430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:09.767 [2024-06-11 14:08:02.665453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:09.767 [2024-06-11 14:08:02.665466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:09.767 [2024-06-11 14:08:02.665483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:09.767 [2024-06-11 14:08:02.665505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:09.767 qpair failed and we were unable to recover it. 
00:40:10.028 [2024-06-11 14:08:02.675257] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.675379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.675406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.675419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.675431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.675452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 00:40:10.028 [2024-06-11 14:08:02.685352] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.685451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.685473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.685493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.685504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.685527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 00:40:10.028 [2024-06-11 14:08:02.695399] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.695504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.695527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.695540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.695552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.695573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 
00:40:10.028 [2024-06-11 14:08:02.705431] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.705531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.705554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.705568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.705580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.705603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 00:40:10.028 [2024-06-11 14:08:02.715459] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.715554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.715577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.715590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.715601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.715628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 00:40:10.028 [2024-06-11 14:08:02.725504] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.725658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.725681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.725695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.725706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.725730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 
00:40:10.028 [2024-06-11 14:08:02.735512] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.735666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.735688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.735702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.735714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.735736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 00:40:10.028 [2024-06-11 14:08:02.745551] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.028 [2024-06-11 14:08:02.745648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.028 [2024-06-11 14:08:02.745670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.028 [2024-06-11 14:08:02.745683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.028 [2024-06-11 14:08:02.745695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.028 [2024-06-11 14:08:02.745716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.028 qpair failed and we were unable to recover it. 00:40:10.028 [2024-06-11 14:08:02.755564] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.755667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.755688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.755701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.755713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.755735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 
00:40:10.029 [2024-06-11 14:08:02.765528] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.765616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.765645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.765658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.765670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.765692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.775615] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.775712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.775735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.775749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.775760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.775782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.785693] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.785789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.785811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.785824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.785836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.785858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 
00:40:10.029 [2024-06-11 14:08:02.795678] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.795777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.795800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.795813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.795824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.795846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.805706] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.805844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.805867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.805881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.805892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.805918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.815733] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.815881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.815904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.815917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.815929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.815951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 
00:40:10.029 [2024-06-11 14:08:02.825704] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.825833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.825855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.825869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.825880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.825902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.835836] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.835938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.835961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.835974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.835986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.836008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.845823] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.845933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.845955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.845968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.845980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.846002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 
00:40:10.029 [2024-06-11 14:08:02.855919] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.029 [2024-06-11 14:08:02.856010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.029 [2024-06-11 14:08:02.856036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.029 [2024-06-11 14:08:02.856049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.029 [2024-06-11 14:08:02.856061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.029 [2024-06-11 14:08:02.856085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.029 qpair failed and we were unable to recover it. 00:40:10.029 [2024-06-11 14:08:02.865868] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.865964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.865986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.865999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.866011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.866033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 00:40:10.030 [2024-06-11 14:08:02.875914] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.876015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.876037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.876050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.876062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.876084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 
00:40:10.030 [2024-06-11 14:08:02.885961] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.886065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.886088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.886101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.886113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.886135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 00:40:10.030 [2024-06-11 14:08:02.896006] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.896109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.896132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.896145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.896156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.896181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 00:40:10.030 [2024-06-11 14:08:02.906138] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.906264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.906287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.906300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.906312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.906334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 
00:40:10.030 [2024-06-11 14:08:02.916032] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.916127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.916150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.916163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.916175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.916198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 00:40:10.030 [2024-06-11 14:08:02.926004] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.030 [2024-06-11 14:08:02.926096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.030 [2024-06-11 14:08:02.926119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.030 [2024-06-11 14:08:02.926132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.030 [2024-06-11 14:08:02.926145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.030 [2024-06-11 14:08:02.926167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.030 qpair failed and we were unable to recover it. 00:40:10.030 [2024-06-11 14:08:02.936121] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.936214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.936237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.936250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.936261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.936283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 
00:40:10.291 [2024-06-11 14:08:02.946116] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.946214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.946240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.946253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.946265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.946288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:02.956204] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.956306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.956331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.956344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.956356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.956380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:02.966185] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.966289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.966313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.966326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.966338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.966359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 
00:40:10.291 [2024-06-11 14:08:02.976197] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.976290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.976313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.976326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.976338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.976361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:02.986243] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.986341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.986364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.986377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.986392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.986415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:02.996377] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:02.996481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:02.996504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:02.996517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:02.996529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:02.996551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 
00:40:10.291 [2024-06-11 14:08:03.006296] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:03.006427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:03.006450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:03.006463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:03.006482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:03.006505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:03.016317] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:03.016410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:03.016432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:03.016445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:03.016457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:03.016487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:03.026364] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:03.026497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:03.026519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:03.026533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:03.026545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:03.026567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 
00:40:10.291 [2024-06-11 14:08:03.036379] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:03.036474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.291 [2024-06-11 14:08:03.036501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.291 [2024-06-11 14:08:03.036514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.291 [2024-06-11 14:08:03.036526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.291 [2024-06-11 14:08:03.036548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.291 qpair failed and we were unable to recover it. 00:40:10.291 [2024-06-11 14:08:03.046395] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.291 [2024-06-11 14:08:03.046495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.046519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.046532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.046544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.046566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.056448] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.056549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.056571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.056584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.056596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.056618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 
00:40:10.292 [2024-06-11 14:08:03.066516] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.066613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.066636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.066649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.066661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.066683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.076519] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.076613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.076635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.076648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.076663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.076685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.086568] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.086682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.086705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.086718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.086730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.086753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 
00:40:10.292 [2024-06-11 14:08:03.096541] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.096637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.096660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.096673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.096685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.096706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.106676] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.106774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.106797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.106810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.106822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.106844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.116611] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.116762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.116784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.116797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.116809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.116831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 
00:40:10.292 [2024-06-11 14:08:03.126627] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.126725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.126748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.126761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.126773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.126794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.136666] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.136779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.136801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.136814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.136826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.136848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.146711] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.146839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.146861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.146874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.146886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.146907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 
00:40:10.292 [2024-06-11 14:08:03.156756] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.156853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.156875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.156888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.156900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.292 [2024-06-11 14:08:03.156922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.292 qpair failed and we were unable to recover it. 00:40:10.292 [2024-06-11 14:08:03.166735] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.292 [2024-06-11 14:08:03.166824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.292 [2024-06-11 14:08:03.166846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.292 [2024-06-11 14:08:03.166859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.292 [2024-06-11 14:08:03.166875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.293 [2024-06-11 14:08:03.166897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.293 qpair failed and we were unable to recover it. 00:40:10.293 [2024-06-11 14:08:03.176827] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.293 [2024-06-11 14:08:03.176931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.293 [2024-06-11 14:08:03.176953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.293 [2024-06-11 14:08:03.176966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.293 [2024-06-11 14:08:03.176978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.293 [2024-06-11 14:08:03.177000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.293 qpair failed and we were unable to recover it. 
00:40:10.293 [2024-06-11 14:08:03.186842] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.293 [2024-06-11 14:08:03.186943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.293 [2024-06-11 14:08:03.186966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.293 [2024-06-11 14:08:03.186979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.293 [2024-06-11 14:08:03.186990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.293 [2024-06-11 14:08:03.187012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.293 qpair failed and we were unable to recover it. 00:40:10.293 [2024-06-11 14:08:03.196863] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.293 [2024-06-11 14:08:03.196961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.293 [2024-06-11 14:08:03.196984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.293 [2024-06-11 14:08:03.196996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.293 [2024-06-11 14:08:03.197008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.293 [2024-06-11 14:08:03.197030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.293 qpair failed and we were unable to recover it. 00:40:10.553 [2024-06-11 14:08:03.206880] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.553 [2024-06-11 14:08:03.206988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.553 [2024-06-11 14:08:03.207012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.553 [2024-06-11 14:08:03.207025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.553 [2024-06-11 14:08:03.207037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.553 [2024-06-11 14:08:03.207059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.553 qpair failed and we were unable to recover it. 
00:40:10.553 [2024-06-11 14:08:03.216896] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.553 [2024-06-11 14:08:03.217047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.553 [2024-06-11 14:08:03.217070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.553 [2024-06-11 14:08:03.217083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.553 [2024-06-11 14:08:03.217094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.553 [2024-06-11 14:08:03.217117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.553 qpair failed and we were unable to recover it. 00:40:10.553 [2024-06-11 14:08:03.226949] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.227048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.227071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.227083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.227095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.227117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.236959] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.237092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.237114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.237127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.237139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.237161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 
00:40:10.554 [2024-06-11 14:08:03.247007] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.247097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.247120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.247133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.247145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.247167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.257027] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.257123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.257145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.257162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.257174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.257196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.267065] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.267159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.267181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.267194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.267206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.267227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 
00:40:10.554 [2024-06-11 14:08:03.277025] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.277127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.277149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.277162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.277174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.277196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.287132] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.287228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.287251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.287264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.287275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.287297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.297148] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.297303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.297325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.297339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.297350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.297372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 
00:40:10.554 [2024-06-11 14:08:03.307178] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.307270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.307293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.307307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.307319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.307341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.317142] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.317238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.317259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.317272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.317284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.317306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.327197] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.327368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.327391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.327405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.327416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.327438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 
00:40:10.554 [2024-06-11 14:08:03.337257] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.337364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.337387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.337400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.337411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.337433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.347295] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.347387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.347410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.347427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.554 [2024-06-11 14:08:03.347438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.554 [2024-06-11 14:08:03.347460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.554 qpair failed and we were unable to recover it. 00:40:10.554 [2024-06-11 14:08:03.357240] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.554 [2024-06-11 14:08:03.357346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.554 [2024-06-11 14:08:03.357368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.554 [2024-06-11 14:08:03.357381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.357393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.357415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 
00:40:10.555 [2024-06-11 14:08:03.367347] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.555 [2024-06-11 14:08:03.367542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.555 [2024-06-11 14:08:03.367566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.555 [2024-06-11 14:08:03.367579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.367591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.367615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 00:40:10.555 [2024-06-11 14:08:03.377373] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.555 [2024-06-11 14:08:03.377470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.555 [2024-06-11 14:08:03.377497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.555 [2024-06-11 14:08:03.377510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.377522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.377545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 00:40:10.555 [2024-06-11 14:08:03.387407] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.555 [2024-06-11 14:08:03.387681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.555 [2024-06-11 14:08:03.387704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.555 [2024-06-11 14:08:03.387717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.387729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.387752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 
00:40:10.555 [2024-06-11 14:08:03.397528] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.555 [2024-06-11 14:08:03.397621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.555 [2024-06-11 14:08:03.397643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.555 [2024-06-11 14:08:03.397656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.397668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.397690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 00:40:10.555 [2024-06-11 14:08:03.407450] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.555 [2024-06-11 14:08:03.407551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.555 [2024-06-11 14:08:03.407574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.555 [2024-06-11 14:08:03.407587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.407598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.407621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 00:40:10.555 [2024-06-11 14:08:03.417457] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:10.555 [2024-06-11 14:08:03.417604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:10.555 [2024-06-11 14:08:03.417627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:10.555 [2024-06-11 14:08:03.417640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:10.555 [2024-06-11 14:08:03.417652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:10.555 [2024-06-11 14:08:03.417675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:10.555 qpair failed and we were unable to recover it. 
00:40:10.555 [2024-06-11 14:08:03.427567] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.555 [2024-06-11 14:08:03.427670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.555 [2024-06-11 14:08:03.427692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.555 [2024-06-11 14:08:03.427706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.555 [2024-06-11 14:08:03.427717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.555 [2024-06-11 14:08:03.427739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.555 qpair failed and we were unable to recover it.
00:40:10.555 [2024-06-11 14:08:03.437602] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.555 [2024-06-11 14:08:03.437694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.555 [2024-06-11 14:08:03.437716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.555 [2024-06-11 14:08:03.437733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.555 [2024-06-11 14:08:03.437745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.555 [2024-06-11 14:08:03.437767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.555 qpair failed and we were unable to recover it.
00:40:10.555 [2024-06-11 14:08:03.447573] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.555 [2024-06-11 14:08:03.447690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.555 [2024-06-11 14:08:03.447713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.555 [2024-06-11 14:08:03.447726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.555 [2024-06-11 14:08:03.447738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.555 [2024-06-11 14:08:03.447760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.555 qpair failed and we were unable to recover it.
00:40:10.555 [2024-06-11 14:08:03.457683] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.555 [2024-06-11 14:08:03.457777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.555 [2024-06-11 14:08:03.457800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.555 [2024-06-11 14:08:03.457813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.555 [2024-06-11 14:08:03.457825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.555 [2024-06-11 14:08:03.457847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.555 qpair failed and we were unable to recover it.
00:40:10.815 [2024-06-11 14:08:03.467557] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.467653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.467674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.467687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.467699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.467720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.477655] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.477749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.477771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.477784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.477796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.477818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.487671] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.487768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.487791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.487804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.487816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.487838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.497671] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.497810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.497832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.497846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.497858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.497879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.507768] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.507869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.507891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.507905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.507917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.507938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.517797] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.517898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.517921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.517934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.517946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.517968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.527809] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.527902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.527931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.527945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.527957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.527978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.537836] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.537930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.537952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.537966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.537977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.537999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.547879] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.547972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.547995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.548008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.548020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.548042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.557933] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.558028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.558050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.558063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.558075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.816 [2024-06-11 14:08:03.558097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.816 qpair failed and we were unable to recover it.
00:40:10.816 [2024-06-11 14:08:03.567997] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.816 [2024-06-11 14:08:03.568097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.816 [2024-06-11 14:08:03.568119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.816 [2024-06-11 14:08:03.568132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.816 [2024-06-11 14:08:03.568145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.568167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.577945] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.578038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.578060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.578074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.578086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.578108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.587924] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.588037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.588059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.588072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.588083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.588105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.598054] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.598155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.598177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.598190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.598202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.598224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.608060] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.608153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.608176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.608189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.608201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.608223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.618060] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.618164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.618190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.618204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.618215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.618237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.628068] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.628222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.628245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.628259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.628270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.628293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.638144] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.638299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.638321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.638335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.638346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.638369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.648140] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.648241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.648263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.648276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.648289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.648312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.658157] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.658253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.658275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.658288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.658302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.658327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.668198] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.668297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.668319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.668333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.668345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.668367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.678212] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.817 [2024-06-11 14:08:03.678343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.817 [2024-06-11 14:08:03.678365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.817 [2024-06-11 14:08:03.678379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.817 [2024-06-11 14:08:03.678391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.817 [2024-06-11 14:08:03.678414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.817 qpair failed and we were unable to recover it.
00:40:10.817 [2024-06-11 14:08:03.688271] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.818 [2024-06-11 14:08:03.688379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.818 [2024-06-11 14:08:03.688402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.818 [2024-06-11 14:08:03.688416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.818 [2024-06-11 14:08:03.688428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.818 [2024-06-11 14:08:03.688450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.818 qpair failed and we were unable to recover it.
00:40:10.818 [2024-06-11 14:08:03.698235] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.818 [2024-06-11 14:08:03.698323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.818 [2024-06-11 14:08:03.698345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.818 [2024-06-11 14:08:03.698358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.818 [2024-06-11 14:08:03.698369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.818 [2024-06-11 14:08:03.698392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.818 qpair failed and we were unable to recover it.
00:40:10.818 [2024-06-11 14:08:03.708341] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.818 [2024-06-11 14:08:03.708440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.818 [2024-06-11 14:08:03.708466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.818 [2024-06-11 14:08:03.708485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.818 [2024-06-11 14:08:03.708498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.818 [2024-06-11 14:08:03.708521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.818 qpair failed and we were unable to recover it.
00:40:10.818 [2024-06-11 14:08:03.718283] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:10.818 [2024-06-11 14:08:03.718453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:10.818 [2024-06-11 14:08:03.718480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:10.818 [2024-06-11 14:08:03.718493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:10.818 [2024-06-11 14:08:03.718505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:10.818 [2024-06-11 14:08:03.718528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:10.818 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.728304] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.728407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.728429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.728442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.728454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.728482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.738443] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.738595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.738617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.738630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.738642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.738665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.748393] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.748496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.748518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.748532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.748543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.748570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.758387] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.758499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.758521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.758535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.758546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.758568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.768468] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.768570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.768593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.768606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.768618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.768640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.778510] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.778601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.778623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.778636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.778648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.778669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.788584] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.788678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.788701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.788714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.788726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.788748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.798506] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.079 [2024-06-11 14:08:03.798631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.079 [2024-06-11 14:08:03.798657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.079 [2024-06-11 14:08:03.798671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.079 [2024-06-11 14:08:03.798682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.079 [2024-06-11 14:08:03.798704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.079 qpair failed and we were unable to recover it.
00:40:11.079 [2024-06-11 14:08:03.808541] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.808635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.808657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.808670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.808682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.808705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.818640] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.818729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.818751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.818765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.818777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.818799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.828648] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.828807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.828829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.828842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.828854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.828875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.838685] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.838866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.838889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.838902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.838914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.838940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.848719] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.848818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.848840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.848853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.848865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.848887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.858885] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.858995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.859018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.859031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.859043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.859065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.868881] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.869012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.869033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.869047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.869058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.869080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.878853] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.878946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.878967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.878980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.878992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.879014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.888947] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.889106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.889132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.889146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.889157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.889179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.898868] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.899044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.899066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.899080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.899091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.899113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.908906] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.909003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.909026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.909039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.909051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.909073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.918955] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.919075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.919097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.919110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.919122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.919144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.928957] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.929052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.929075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.929088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.929104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.080 [2024-06-11 14:08:03.929126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.080 qpair failed and we were unable to recover it.
00:40:11.080 [2024-06-11 14:08:03.938975] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.080 [2024-06-11 14:08:03.939066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.080 [2024-06-11 14:08:03.939089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.080 [2024-06-11 14:08:03.939102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.080 [2024-06-11 14:08:03.939114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.081 [2024-06-11 14:08:03.939135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.081 qpair failed and we were unable to recover it.
00:40:11.081 [2024-06-11 14:08:03.948956] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.081 [2024-06-11 14:08:03.949087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.081 [2024-06-11 14:08:03.949109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.081 [2024-06-11 14:08:03.949122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.081 [2024-06-11 14:08:03.949133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.081 [2024-06-11 14:08:03.949155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.081 qpair failed and we were unable to recover it.
00:40:11.081 [2024-06-11 14:08:03.958962] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.081 [2024-06-11 14:08:03.959132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.081 [2024-06-11 14:08:03.959156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.081 [2024-06-11 14:08:03.959169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.081 [2024-06-11 14:08:03.959181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.081 [2024-06-11 14:08:03.959204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.081 qpair failed and we were unable to recover it.
00:40:11.081 [2024-06-11 14:08:03.969037] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.081 [2024-06-11 14:08:03.969224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.081 [2024-06-11 14:08:03.969247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.081 [2024-06-11 14:08:03.969261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.081 [2024-06-11 14:08:03.969272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.081 [2024-06-11 14:08:03.969295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.081 qpair failed and we were unable to recover it.
00:40:11.081 [2024-06-11 14:08:03.979169] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.081 [2024-06-11 14:08:03.979319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.081 [2024-06-11 14:08:03.979342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.081 [2024-06-11 14:08:03.979355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.081 [2024-06-11 14:08:03.979367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.081 [2024-06-11 14:08:03.979389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.081 qpair failed and we were unable to recover it.
00:40:11.341 [2024-06-11 14:08:03.989266] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.341 [2024-06-11 14:08:03.989361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.341 [2024-06-11 14:08:03.989384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.341 [2024-06-11 14:08:03.989397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.341 [2024-06-11 14:08:03.989408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.341 [2024-06-11 14:08:03.989431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.341 qpair failed and we were unable to recover it.
00:40:11.341 [2024-06-11 14:08:03.999132] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.341 [2024-06-11 14:08:03.999262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.341 [2024-06-11 14:08:03.999284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.341 [2024-06-11 14:08:03.999297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.341 [2024-06-11 14:08:03.999310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.341 [2024-06-11 14:08:03.999332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.341 qpair failed and we were unable to recover it.
00:40:11.341 [2024-06-11 14:08:04.009175] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.009266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.009288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.009302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.009314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.009336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.019153] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.019246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.019268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.019281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.019297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.019319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.029195] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.029319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.029341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.029355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.029366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.029388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.039322] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.039475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.039503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.039516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.039528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.039551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.049292] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.049385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.049407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.049420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.049432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.049454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.059386] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.059508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.059531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.059544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.059556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.059579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.069392] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.069556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.069580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.069593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.069605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.069628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.079404] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.079508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.079531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.079544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.079556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.079578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.089427] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.089527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.089550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.089564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.089575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.089597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.099494] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.099590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.099613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.099626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.099638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.099660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.109435] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.342 [2024-06-11 14:08:04.109531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.342 [2024-06-11 14:08:04.109554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.342 [2024-06-11 14:08:04.109567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.342 [2024-06-11 14:08:04.109582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.342 [2024-06-11 14:08:04.109605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.342 qpair failed and we were unable to recover it.
00:40:11.342 [2024-06-11 14:08:04.119469] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.342 [2024-06-11 14:08:04.119572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.342 [2024-06-11 14:08:04.119594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.342 [2024-06-11 14:08:04.119607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.342 [2024-06-11 14:08:04.119619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.342 [2024-06-11 14:08:04.119641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.342 qpair failed and we were unable to recover it. 00:40:11.342 [2024-06-11 14:08:04.129496] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.342 [2024-06-11 14:08:04.129592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.342 [2024-06-11 14:08:04.129615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.342 [2024-06-11 14:08:04.129628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.342 [2024-06-11 14:08:04.129639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.342 [2024-06-11 14:08:04.129662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.342 qpair failed and we were unable to recover it. 00:40:11.342 [2024-06-11 14:08:04.139511] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.342 [2024-06-11 14:08:04.139661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.342 [2024-06-11 14:08:04.139683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.342 [2024-06-11 14:08:04.139697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.139708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.139730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 
00:40:11.343 [2024-06-11 14:08:04.149616] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.149708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.149731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.149744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.149756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.149779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.343 [2024-06-11 14:08:04.159649] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.159809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.159832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.159846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.159857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.159880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.343 [2024-06-11 14:08:04.169625] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.169718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.169741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.169754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.169766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.169788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 
00:40:11.343 [2024-06-11 14:08:04.179635] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.179730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.179752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.179765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.179777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.179799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.343 [2024-06-11 14:08:04.189717] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.189868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.189891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.189904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.189916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.189938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.343 [2024-06-11 14:08:04.199757] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.199849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.199871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.199888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.199900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.199922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 
00:40:11.343 [2024-06-11 14:08:04.209795] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.209888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.209910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.209923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.209935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.209957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.343 [2024-06-11 14:08:04.219821] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.219931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.219954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.219967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.219979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.220001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.343 [2024-06-11 14:08:04.229832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.229928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.229951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.229964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.229976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.229998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 
00:40:11.343 [2024-06-11 14:08:04.239808] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.343 [2024-06-11 14:08:04.239905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.343 [2024-06-11 14:08:04.239927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.343 [2024-06-11 14:08:04.239940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.343 [2024-06-11 14:08:04.239952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.343 [2024-06-11 14:08:04.239974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.343 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.249882] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.249974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.249997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.250010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.250022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.250044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.259944] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.260050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.260073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.260086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.260098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.260120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 
00:40:11.604 [2024-06-11 14:08:04.269978] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.270076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.270098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.270111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.270123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.270144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.279993] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.280101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.280123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.280136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.280148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.280170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.290024] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.290115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.290138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.290158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.290170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.290191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 
00:40:11.604 [2024-06-11 14:08:04.300087] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.300243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.300265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.300278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.300290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.300311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.310170] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.310318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.310340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.310353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.310365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.310387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.320110] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.320206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.320229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.320242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.320254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.320276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 
00:40:11.604 [2024-06-11 14:08:04.330123] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.330221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.330244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.330257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.330269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.330291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.340154] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.340250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.340272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.340285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.340297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.340319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 00:40:11.604 [2024-06-11 14:08:04.350217] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.350361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.350383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.350396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.350408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.350430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.604 qpair failed and we were unable to recover it. 
00:40:11.604 [2024-06-11 14:08:04.360244] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.604 [2024-06-11 14:08:04.360344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.604 [2024-06-11 14:08:04.360367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.604 [2024-06-11 14:08:04.360380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.604 [2024-06-11 14:08:04.360392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.604 [2024-06-11 14:08:04.360414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.370280] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.370366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.370388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.370401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.370413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.370435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.380252] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.380393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.380415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.380432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.380443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.380466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 
00:40:11.605 [2024-06-11 14:08:04.390345] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.390500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.390523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.390536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.390548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.390571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.400433] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.400567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.400589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.400603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.400614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.400636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.410304] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.410401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.410424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.410437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.410449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.410471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 
00:40:11.605 [2024-06-11 14:08:04.420403] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.420497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.420520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.420533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.420545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.420567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.430460] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.430555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.430578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.430591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.430603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.430625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.440458] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.440570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.440592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.440605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.440617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.440640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 
00:40:11.605 [2024-06-11 14:08:04.450568] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.450664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.450686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.450700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.450711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.450735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.460547] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.460673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.460695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.460709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.460720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.460742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.470497] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.470589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.470613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.470626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.470638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.470660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 
00:40:11.605 [2024-06-11 14:08:04.480594] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.480695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.480717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.480730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.480742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.480764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.490694] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.490791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.605 [2024-06-11 14:08:04.490813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.605 [2024-06-11 14:08:04.490827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.605 [2024-06-11 14:08:04.490838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.605 [2024-06-11 14:08:04.490860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.605 qpair failed and we were unable to recover it. 00:40:11.605 [2024-06-11 14:08:04.500688] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.605 [2024-06-11 14:08:04.500781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.606 [2024-06-11 14:08:04.500803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.606 [2024-06-11 14:08:04.500816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.606 [2024-06-11 14:08:04.500827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.606 [2024-06-11 14:08:04.500849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.606 qpair failed and we were unable to recover it. 
00:40:11.606 [2024-06-11 14:08:04.510676] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.606 [2024-06-11 14:08:04.510772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.606 [2024-06-11 14:08:04.510794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.606 [2024-06-11 14:08:04.510807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.606 [2024-06-11 14:08:04.510819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.606 [2024-06-11 14:08:04.510842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.606 qpair failed and we were unable to recover it. 00:40:11.866 [2024-06-11 14:08:04.520777] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.866 [2024-06-11 14:08:04.520880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.866 [2024-06-11 14:08:04.520902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.866 [2024-06-11 14:08:04.520915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.866 [2024-06-11 14:08:04.520927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.866 [2024-06-11 14:08:04.520949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.866 qpair failed and we were unable to recover it. 00:40:11.866 [2024-06-11 14:08:04.530748] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.866 [2024-06-11 14:08:04.530846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.866 [2024-06-11 14:08:04.530870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.866 [2024-06-11 14:08:04.530883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.866 [2024-06-11 14:08:04.530895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.866 [2024-06-11 14:08:04.530916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.866 qpair failed and we were unable to recover it. 
00:40:11.866 [2024-06-11 14:08:04.540789] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.866 [2024-06-11 14:08:04.540886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.866 [2024-06-11 14:08:04.540909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.866 [2024-06-11 14:08:04.540922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.866 [2024-06-11 14:08:04.540934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.867 [2024-06-11 14:08:04.540956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.867 qpair failed and we were unable to recover it. 00:40:11.867 [2024-06-11 14:08:04.550837] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.867 [2024-06-11 14:08:04.550937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.867 [2024-06-11 14:08:04.550960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.867 [2024-06-11 14:08:04.550973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.867 [2024-06-11 14:08:04.550984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.867 [2024-06-11 14:08:04.551006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.867 qpair failed and we were unable to recover it. 00:40:11.867 [2024-06-11 14:08:04.560817] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:11.867 [2024-06-11 14:08:04.560913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:11.867 [2024-06-11 14:08:04.560939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:11.867 [2024-06-11 14:08:04.560952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:11.867 [2024-06-11 14:08:04.560964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80 00:40:11.867 [2024-06-11 14:08:04.560986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:11.867 qpair failed and we were unable to recover it. 
00:40:11.867 [2024-06-11 14:08:04.570822] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.570971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.570994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.571007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.571019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.571041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.580884] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.580985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.581007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.581021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.581032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.581054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.590820] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.590949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.590972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.590985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.590997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.591018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.600934] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.601033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.601056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.601069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.601081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.601106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.610901] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.610996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.611018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.611031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.611043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.611066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.620922] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.621016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.621038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.621052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.621063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.621086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.631024] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.631121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.631143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.631156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.631168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.631191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.641071] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.641158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.641181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.641194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.641206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.641228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.651067] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.651157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.651184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.651197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.651209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.651230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.661086] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.661181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.661204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.661217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.661229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.661251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.671126] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.671241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.867 [2024-06-11 14:08:04.671263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.867 [2024-06-11 14:08:04.671276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.867 [2024-06-11 14:08:04.671288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.867 [2024-06-11 14:08:04.671310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.867 qpair failed and we were unable to recover it.
00:40:11.867 [2024-06-11 14:08:04.681254] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.867 [2024-06-11 14:08:04.681445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.681468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.681495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.681507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.681530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.691112] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.691212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.691235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.691248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.691259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.691285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.701224] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.701332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.701355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.701368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.701380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.701401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.711268] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.711365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.711387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.711400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.711412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.711434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.721260] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.721382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.721404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.721417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.721429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.721451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.731290] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.731382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.731404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.731417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.731429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.731451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.741314] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.741406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.741432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.741445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.741457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.741483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.751289] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.751385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.751407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.751420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.751432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.751455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.761362] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.761460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.761487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.761501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.761513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.761535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:11.868 [2024-06-11 14:08:04.771462] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:11.868 [2024-06-11 14:08:04.771561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:11.868 [2024-06-11 14:08:04.771584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:11.868 [2024-06-11 14:08:04.771597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:11.868 [2024-06-11 14:08:04.771609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:11.868 [2024-06-11 14:08:04.771631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:11.868 qpair failed and we were unable to recover it.
00:40:12.128 [2024-06-11 14:08:04.781412] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.128 [2024-06-11 14:08:04.781534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.128 [2024-06-11 14:08:04.781557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.128 [2024-06-11 14:08:04.781570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.128 [2024-06-11 14:08:04.781581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:12.128 [2024-06-11 14:08:04.781607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.128 qpair failed and we were unable to recover it.
00:40:12.128 [2024-06-11 14:08:04.791502] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.128 [2024-06-11 14:08:04.791598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.128 [2024-06-11 14:08:04.791620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.128 [2024-06-11 14:08:04.791633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.128 [2024-06-11 14:08:04.791645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf6f80
00:40:12.128 [2024-06-11 14:08:04.791667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:40:12.128 qpair failed and we were unable to recover it.
00:40:12.128 [2024-06-11 14:08:04.801581] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.128 [2024-06-11 14:08:04.801769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.128 [2024-06-11 14:08:04.801836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.128 [2024-06-11 14:08:04.801874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.128 [2024-06-11 14:08:04.801906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff238000b90
00:40:12.128 [2024-06-11 14:08:04.801969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:40:12.128 qpair failed and we were unable to recover it.
00:40:12.128 [2024-06-11 14:08:04.811583] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:40:12.128 [2024-06-11 14:08:04.811706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:40:12.128 [2024-06-11 14:08:04.811743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:40:12.128 [2024-06-11 14:08:04.811766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:12.128 [2024-06-11 14:08:04.811786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff238000b90
00:40:12.128 [2024-06-11 14:08:04.811825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:40:12.128 qpair failed and we were unable to recover it.
00:40:12.128 [2024-06-11 14:08:04.812004] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:40:12.128 A controller has encountered a failure and is being reset.
00:40:12.128 [2024-06-11 14:08:04.812119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe04f70 (9): Bad file descriptor
00:40:12.128 Controller properly reset.
00:40:12.128 Initializing NVMe Controllers
00:40:12.128 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:12.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:40:12.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:40:12.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:40:12.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:40:12.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:40:12.128 Initialization complete. Launching workers.
00:40:12.128 Starting thread on core 1
00:40:12.128 Starting thread on core 2
00:40:12.128 Starting thread on core 3
00:40:12.128 Starting thread on core 0
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:40:12.128
00:40:12.128 real 0m11.589s
00:40:12.128 user 0m20.902s
00:40:12.128 sys 0m4.799s
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:12.128 ************************************
00:40:12.128 END TEST nvmf_target_disconnect_tc2
00:40:12.128 ************************************
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:40:12.128 rmmod nvme_tcp
00:40:12.128 rmmod nvme_fabrics
00:40:12.128 rmmod nvme_keyring
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1677209 ']'
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1677209
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1677209 ']'
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1677209
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:40:12.128 14:08:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1677209
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']'
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1677209'
00:40:12.388 killing process with pid 1677209
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1677209
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1677209
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:40:12.388 14:08:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:14.926 14:08:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:40:14.926
00:40:14.926 real 0m21.491s
00:40:14.926 user 0m48.969s
00:40:14.926 sys 0m10.725s
00:40:14.926 14:08:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable
00:40:14.926 14:08:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:40:14.926 ************************************
00:40:14.926 END TEST nvmf_target_disconnect
00:40:14.926 ************************************
00:40:14.926 14:08:07 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host
00:40:14.926 14:08:07 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:40:14.926 14:08:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:40:14.926 14:08:07 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:40:14.926
00:40:14.926 real 30m29.110s
00:40:14.926 user 76m10.181s
00:40:14.926 sys 9m36.498s
00:40:14.926 14:08:07 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:40:14.926 14:08:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:40:14.926 ************************************
00:40:14.926 END TEST nvmf_tcp
00:40:14.926 ************************************
00:40:14.926 14:08:07 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:40:14.926 14:08:07 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:40:14.926 14:08:07 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:40:14.926 14:08:07 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:40:14.926 14:08:07 -- common/autotest_common.sh@10 -- # set +x
00:40:14.926 ************************************
00:40:14.926 START TEST spdkcli_nvmf_tcp
00:40:14.926 ************************************
00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:40:14.926 * Looking for test storage...
00:40:14.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:14.926 14:08:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1678943 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1678943 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1678943 ']' 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:14.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:14.927 14:08:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:14.927 [2024-06-11 14:08:07.716323] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:14.927 [2024-06-11 14:08:07.716388] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678943 ] 00:40:14.927 EAL: No free 2048 kB hugepages reported on node 1 00:40:14.927 [2024-06-11 14:08:07.819114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:15.186 [2024-06-11 14:08:07.908424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.186 [2024-06-11 14:08:07.908430] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.754 14:08:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:15.754 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:15.754 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:15.754 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:15.754 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:15.754 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:15.754 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:15.754 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:15.754 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:15.754 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:15.754 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:15.754 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:15.754 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:15.754 ' 00:40:18.291 [2024-06-11 14:08:11.048535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.669 [2024-06-11 14:08:12.224668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:21.572 [2024-06-11 14:08:14.387502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:23.474 [2024-06-11 14:08:16.245626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:24.848 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:24.848 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:24.848 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:24.848 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:24.848 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:24.848 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:24.848 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:24.848 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:24.849 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:24.849 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:24.849 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:24.849 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:24.849 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:25.107 14:08:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:25.365 14:08:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:25.365 14:08:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:25.365 14:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:25.365 14:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:25.365 14:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.624 14:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:25.624 14:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:25.624 14:08:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.624 14:08:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:25.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:25.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:25.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:25.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:25.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:25.624 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:25.624 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:25.624 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:25.624 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:25.624 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:25.624 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:25.624 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:25.624 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:25.624 ' 00:40:30.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:30.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:30.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:30.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:30.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:30.947 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:30.947 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:30.947 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:30.947 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:30.947 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:30.947 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:40:30.947 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:30.947 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:30.947 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1678943 ']' 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1678943' 00:40:30.947 killing process with pid 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1678943 ']' 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1678943 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1678943 ']' 00:40:30.947 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1678943 00:40:30.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1678943) - No such process 00:40:30.948 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1678943 is not found' 00:40:30.948 Process with pid 1678943 is not found 00:40:30.948 14:08:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:30.948 14:08:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:30.948 14:08:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:30.948 00:40:30.948 real 0m16.083s 00:40:30.948 user 0m33.097s 00:40:30.948 sys 0m0.969s 00:40:30.948 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:30.948 14:08:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.948 ************************************ 00:40:30.948 END TEST spdkcli_nvmf_tcp 00:40:30.948 ************************************ 00:40:30.948 14:08:23 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:30.948 14:08:23 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:30.948 14:08:23 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:30.948 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:40:30.948 ************************************ 00:40:30.948 START TEST nvmf_identify_passthru 00:40:30.948 ************************************ 00:40:30.948 14:08:23 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:30.948 * Looking for test storage... 00:40:30.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:30.948 14:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.948 14:08:23 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.948 14:08:23 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.948 14:08:23 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:30.948 14:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.948 14:08:23 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.948 14:08:23 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.948 14:08:23 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:30.948 14:08:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.948 14:08:23 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.948 14:08:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:30.948 14:08:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:30.948 14:08:23 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:40:30.948 14:08:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
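For readers following the trace: gather_supported_nvmf_pci_devs classifies NICs by PCI vendor:device ID, and 0x8086:0x159b (matched below) is the Intel E810 family that this job's SPDK_TEST_NVMF_NICS=e810 setting expects. A rough standalone equivalent of that lookup, assuming lspci from pciutils is installed, is:

  # Sketch only: list E810 NICs (vendor 0x8086, device 0x159b) with their net interfaces
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci:" "$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
  done

The helper in nvmf/common.sh performs the same classification from its cached pci_bus_cache map, as the trace below shows, rather than by shelling out to lspci.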
00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:39.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:39.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:39.076 14:08:30 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:39.076 Found net devices under 0000:af:00.0: cvl_0_0 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:39.076 Found net devices under 0000:af:00.1: cvl_0_1 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:39.076 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:39.077 14:08:30 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:39.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:39.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:40:39.077 00:40:39.077 --- 10.0.0.2 ping statistics --- 00:40:39.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:39.077 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:39.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:39.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:40:39.077 00:40:39.077 --- 10.0.0.1 ping statistics --- 00:40:39.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:39.077 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:39.077 14:08:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:40:39.077 14:08:30 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:d8:00.0 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:39.077 14:08:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:39.077 EAL: No free 2048 kB hugepages reported on node 1 00:40:43.269 
14:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:40:43.269 14:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:40:43.269 14:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:43.269 14:08:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:43.269 EAL: No free 2048 kB hugepages reported on node 1 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1686414 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:48.545 14:08:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1686414 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1686414 ']' 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:48.545 14:08:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.545 [2024-06-11 14:08:40.643135] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:48.545 [2024-06-11 14:08:40.643199] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.545 EAL: No free 2048 kB hugepages reported on node 1 00:40:48.545 [2024-06-11 14:08:40.749457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:48.545 [2024-06-11 14:08:40.837552] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.545 [2024-06-11 14:08:40.837594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
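The launch traced above starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and waitforlisten then polls the UNIX-domain RPC socket until the application answers; the framework_start_init call a few lines below is what finally releases the target from that wait state. Outside the harness, a minimal hand-run version of the same launch-and-wait step (paths and socket as in this workspace; rpc_get_methods used only as a cheap liveness probe) might look like:

  # Sketch: start the target in the test namespace, then block until its RPC socket responds
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done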
00:40:48.545 [2024-06-11 14:08:40.837607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.545 [2024-06-11 14:08:40.837619] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.545 [2024-06-11 14:08:40.837628] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.545 [2024-06-11 14:08:40.837683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.545 [2024-06-11 14:08:40.837773] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:48.545 [2024-06-11 14:08:40.838282] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.545 [2024-06-11 14:08:40.838283] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:40:48.805 14:08:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 INFO: Log level set to 20 00:40:48.805 INFO: Requests: 00:40:48.805 { 00:40:48.805 "jsonrpc": "2.0", 00:40:48.805 "method": "nvmf_set_config", 00:40:48.805 "id": 1, 00:40:48.805 "params": { 00:40:48.805 "admin_cmd_passthru": { 00:40:48.805 "identify_ctrlr": true 00:40:48.805 } 00:40:48.805 } 00:40:48.805 } 00:40:48.805 00:40:48.805 INFO: response: 00:40:48.805 { 00:40:48.805 "jsonrpc": "2.0", 00:40:48.805 "id": 1, 00:40:48.805 "result": true 00:40:48.805 } 00:40:48.805 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 INFO: Setting log level to 20 00:40:48.805 INFO: Setting log level to 20 00:40:48.805 INFO: Log level set to 20 00:40:48.805 INFO: Log level set to 20 00:40:48.805 INFO: Requests: 00:40:48.805 { 00:40:48.805 "jsonrpc": "2.0", 00:40:48.805 "method": "framework_start_init", 00:40:48.805 "id": 1 00:40:48.805 } 00:40:48.805 00:40:48.805 INFO: Requests: 00:40:48.805 { 00:40:48.805 "jsonrpc": "2.0", 00:40:48.805 "method": "framework_start_init", 00:40:48.805 "id": 1 00:40:48.805 } 00:40:48.805 00:40:48.805 [2024-06-11 14:08:41.600054] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:48.805 INFO: response: 00:40:48.805 { 00:40:48.805 "jsonrpc": "2.0", 00:40:48.805 "id": 1, 00:40:48.805 "result": true 00:40:48.805 } 00:40:48.805 00:40:48.805 INFO: response: 00:40:48.805 { 00:40:48.805 "jsonrpc": "2.0", 00:40:48.805 "id": 1, 00:40:48.805 "result": true 00:40:48.805 } 00:40:48.805 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:48.805 INFO: Setting log level to 40 00:40:48.805 INFO: Setting log level to 40 00:40:48.805 INFO: Setting log level to 40 00:40:48.805 [2024-06-11 14:08:41.613840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 14:08:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.099 Nvme0n1 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.099 [2024-06-11 14:08:44.555062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.099 [ 00:40:52.099 { 00:40:52.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:52.099 "subtype": "Discovery", 00:40:52.099 "listen_addresses": [], 00:40:52.099 "allow_any_host": true, 00:40:52.099 "hosts": [] 00:40:52.099 }, 00:40:52.099 { 00:40:52.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:52.099 "subtype": "NVMe", 00:40:52.099 "listen_addresses": [ 00:40:52.099 { 00:40:52.099 "trtype": "TCP", 00:40:52.099 "adrfam": "IPv4", 00:40:52.099 "traddr": "10.0.0.2", 00:40:52.099 "trsvcid": "4420" 00:40:52.099 } 00:40:52.099 ], 00:40:52.099 "allow_any_host": true, 00:40:52.099 "hosts": [], 00:40:52.099 "serial_number": 
"SPDK00000000000001", 00:40:52.099 "model_number": "SPDK bdev Controller", 00:40:52.099 "max_namespaces": 1, 00:40:52.099 "min_cntlid": 1, 00:40:52.099 "max_cntlid": 65519, 00:40:52.099 "namespaces": [ 00:40:52.099 { 00:40:52.099 "nsid": 1, 00:40:52.099 "bdev_name": "Nvme0n1", 00:40:52.099 "name": "Nvme0n1", 00:40:52.099 "nguid": "5A0AAD0573E24AEFA186B183878AAF5A", 00:40:52.099 "uuid": "5a0aad05-73e2-4aef-a186-b183878aaf5a" 00:40:52.099 } 00:40:52.099 ] 00:40:52.099 } 00:40:52.099 ] 00:40:52.099 14:08:44 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:52.099 EAL: No free 2048 kB hugepages reported on node 1 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:52.099 14:08:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:52.099 EAL: No free 2048 kB hugepages reported on node 1 00:40:52.359 14:08:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:52.359 14:08:45 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:52.359 14:08:45 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:52.359 14:08:45 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:52.359 rmmod nvme_tcp 00:40:52.359 rmmod nvme_fabrics 00:40:52.359 rmmod nvme_keyring 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:40:52.359 14:08:45 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1686414 ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1686414 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1686414 ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1686414 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1686414 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1686414' 00:40:52.359 killing process with pid 1686414 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1686414 00:40:52.359 14:08:45 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1686414 00:40:54.896 14:08:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:54.896 14:08:47 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:54.896 14:08:47 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:54.896 14:08:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:54.896 14:08:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:54.896 14:08:47 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:54.896 14:08:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:54.896 14:08:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:56.803 14:08:49 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:56.803 00:40:56.803 real 0m25.660s 00:40:56.803 user 0m34.208s 00:40:56.803 sys 0m6.897s 00:40:56.803 14:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:56.803 14:08:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:56.803 ************************************ 00:40:56.803 END TEST nvmf_identify_passthru 00:40:56.803 ************************************ 00:40:56.803 14:08:49 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:56.803 14:08:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:56.803 14:08:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:56.803 14:08:49 -- common/autotest_common.sh@10 -- # set +x 00:40:56.803 ************************************ 00:40:56.803 START TEST nvmf_dif 00:40:56.803 ************************************ 00:40:56.803 14:08:49 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:56.803 * Looking for test storage... 
00:40:56.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:56.803 14:08:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:56.803 14:08:49 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:56.803 14:08:49 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:56.803 14:08:49 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:56.803 14:08:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.803 14:08:49 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.803 14:08:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.803 14:08:49 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:40:56.803 14:08:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:56.803 14:08:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:56.803 14:08:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:56.803 14:08:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:56.803 14:08:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:56.803 14:08:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:56.803 14:08:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:56.803 14:08:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:56.803 14:08:49 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:40:56.803 14:08:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:03.415 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:03.415 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:03.415 14:08:55 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:03.416 Found net devices under 0000:af:00.0: cvl_0_0 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:03.416 Found net devices under 0000:af:00.1: cvl_0_1 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:03.416 14:08:55 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:03.416 14:08:56 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:03.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:03.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:41:03.416 00:41:03.416 --- 10.0.0.2 ping statistics --- 00:41:03.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:03.416 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:03.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:03.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:41:03.416 00:41:03.416 --- 10.0.0.1 ping statistics --- 00:41:03.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:03.416 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:03.416 14:08:56 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:06.701 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:41:06.701 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:06.701 14:08:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:06.701 14:08:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1692241 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1692241 00:41:06.701 14:08:59 nvmf_dif -- 
common/autotest_common.sh@830 -- # '[' -z 1692241 ']' 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:06.701 14:08:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.701 14:08:59 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:06.701 [2024-06-11 14:08:59.517922] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:41:06.701 [2024-06-11 14:08:59.517981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:06.701 EAL: No free 2048 kB hugepages reported on node 1 00:41:06.960 [2024-06-11 14:08:59.624069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.960 [2024-06-11 14:08:59.709325] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:06.960 [2024-06-11 14:08:59.709363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:06.960 [2024-06-11 14:08:59.709376] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:06.960 [2024-06-11 14:08:59.709388] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:06.960 [2024-06-11 14:08:59.709398] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
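The setup traced below is the core of the DIF test: a TCP transport created with --dif-insert-or-strip, a null bdev carrying 16 bytes of per-block metadata with DIF type 1, and a subsystem/listener pair exposing it on 10.0.0.2:4420. Since rpc_cmd is a thin wrapper over scripts/rpc.py against the default socket, a hand-run approximation of that sequence (arguments exactly as they appear in the trace) would be:

  # Sketch of the RPC sequence issued by dif.sh for fio_dif_1_default
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420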
00:41:06.960 [2024-06-11 14:08:59.709432] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.526 14:09:00 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:07.526 14:09:00 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:41:07.526 14:09:00 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:07.526 14:09:00 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:07.526 14:09:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.785 14:09:00 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.785 14:09:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:07.785 14:09:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:07.785 14:09:00 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.785 14:09:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.785 [2024-06-11 14:09:00.462021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.785 14:09:00 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.785 14:09:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:07.785 14:09:00 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:07.785 14:09:00 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:07.785 14:09:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.785 ************************************ 00:41:07.785 START TEST fio_dif_1_default 00:41:07.785 ************************************ 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.785 bdev_null0 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.785 14:09:00 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:07.786 [2024-06-11 14:09:00.534364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:07.786 { 00:41:07.786 "params": { 00:41:07.786 "name": "Nvme$subsystem", 00:41:07.786 "trtype": "$TEST_TRANSPORT", 00:41:07.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.786 "adrfam": "ipv4", 00:41:07.786 "trsvcid": "$NVMF_PORT", 00:41:07.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.786 "hdgst": ${hdgst:-false}, 00:41:07.786 "ddgst": ${ddgst:-false} 00:41:07.786 }, 00:41:07.786 "method": "bdev_nvme_attach_controller" 00:41:07.786 } 00:41:07.786 EOF 00:41:07.786 )") 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # grep libasan 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:07.786 "params": { 00:41:07.786 "name": "Nvme0", 00:41:07.786 "trtype": "tcp", 00:41:07.786 "traddr": "10.0.0.2", 00:41:07.786 "adrfam": "ipv4", 00:41:07.786 "trsvcid": "4420", 00:41:07.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:07.786 "hdgst": false, 00:41:07.786 "ddgst": false 00:41:07.786 }, 00:41:07.786 "method": "bdev_nvme_attach_controller" 00:41:07.786 }' 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:07.786 14:09:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.044 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:08.044 fio-3.35 00:41:08.044 Starting 1 thread 00:41:08.303 EAL: No free 2048 kB hugepages reported on node 1 00:41:20.501 00:41:20.501 filename0: (groupid=0, jobs=1): err= 0: pid=1692940: Tue Jun 11 14:09:11 2024 00:41:20.501 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10026msec) 00:41:20.501 slat (nsec): min=8095, max=33858, avg=8438.81, stdev=1471.98 00:41:20.501 clat (usec): min=40891, max=46208, avg=41575.34, stdev=591.15 00:41:20.501 lat (usec): min=40899, max=46233, avg=41583.78, stdev=591.38 00:41:20.501 clat percentiles (usec): 00:41:20.501 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:20.501 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:41:20.501 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:20.501 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:41:20.501 | 99.99th=[46400] 00:41:20.501 bw ( KiB/s): min= 352, max= 416, per=99.84%, avg=384.00, stdev=10.38, samples=20 00:41:20.501 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 
00:41:20.501 lat (msec) : 50=100.00% 00:41:20.501 cpu : usr=84.92%, sys=14.77%, ctx=10, majf=0, minf=224 00:41:20.501 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.501 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.501 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:20.501 00:41:20.501 Run status group 0 (all jobs): 00:41:20.501 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10026-10026msec 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.501 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.501 00:41:20.501 real 0m11.303s 00:41:20.501 user 0m20.372s 00:41:20.501 sys 0m1.851s 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 ************************************ 00:41:20.502 END TEST fio_dif_1_default 00:41:20.502 ************************************ 00:41:20.502 14:09:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:20.502 14:09:11 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:20.502 14:09:11 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 ************************************ 00:41:20.502 START TEST fio_dif_1_multi_subsystems 00:41:20.502 ************************************ 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.502 
14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 bdev_null0 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 [2024-06-11 14:09:11.923652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 bdev_null1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:20.502 { 00:41:20.502 "params": { 00:41:20.502 "name": "Nvme$subsystem", 00:41:20.502 "trtype": "$TEST_TRANSPORT", 00:41:20.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.502 "adrfam": "ipv4", 00:41:20.502 "trsvcid": "$NVMF_PORT", 00:41:20.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.502 "hdgst": ${hdgst:-false}, 00:41:20.502 "ddgst": ${ddgst:-false} 00:41:20.502 }, 00:41:20.502 "method": "bdev_nvme_attach_controller" 00:41:20.502 } 00:41:20.502 EOF 00:41:20.502 )") 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 
-- # for sanitizer in "${sanitizers[@]}" 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:20.502 { 00:41:20.502 "params": { 00:41:20.502 "name": "Nvme$subsystem", 00:41:20.502 "trtype": "$TEST_TRANSPORT", 00:41:20.502 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.502 "adrfam": "ipv4", 00:41:20.502 "trsvcid": "$NVMF_PORT", 00:41:20.502 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.502 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.502 "hdgst": ${hdgst:-false}, 00:41:20.502 "ddgst": ${ddgst:-false} 00:41:20.502 }, 00:41:20.502 "method": "bdev_nvme_attach_controller" 00:41:20.502 } 00:41:20.502 EOF 00:41:20.502 )") 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
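
Stripped of the xtrace noise, the two DIF subsystems provisioned for this test come down to four RPCs per index. A sketch using scripts/rpc.py directly; the harness issues the same calls through its rpc_cmd wrapper, and the rpc.py location assumed here is the standard one inside the SPDK checkout:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for i in 0 1; do
        # 64 MiB null bdev, 512-byte blocks + 16 bytes of metadata, DIF type 1
        "$SPDK/scripts/rpc.py" bdev_null_create "bdev_null$i" 64 512 \
            --md-size 16 --dif-type 1
        # subsystem, namespace, and a TCP listener on 10.0.0.2:4420
        "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
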
00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:41:20.502 14:09:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:20.502 "params": { 00:41:20.502 "name": "Nvme0", 00:41:20.502 "trtype": "tcp", 00:41:20.502 "traddr": "10.0.0.2", 00:41:20.502 "adrfam": "ipv4", 00:41:20.502 "trsvcid": "4420", 00:41:20.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.502 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.502 "hdgst": false, 00:41:20.502 "ddgst": false 00:41:20.502 }, 00:41:20.502 "method": "bdev_nvme_attach_controller" 00:41:20.502 },{ 00:41:20.502 "params": { 00:41:20.502 "name": "Nvme1", 00:41:20.502 "trtype": "tcp", 00:41:20.502 "traddr": "10.0.0.2", 00:41:20.502 "adrfam": "ipv4", 00:41:20.502 "trsvcid": "4420", 00:41:20.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.503 "hdgst": false, 00:41:20.503 "ddgst": false 00:41:20.503 }, 00:41:20.503 "method": "bdev_nvme_attach_controller" 00:41:20.503 }' 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.503 14:09:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.503 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.503 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.503 fio-3.35 00:41:20.503 Starting 2 threads 00:41:20.503 EAL: No free 2048 kB hugepages reported on node 1 00:41:30.472 00:41:30.472 filename0: (groupid=0, jobs=1): err= 0: pid=1695437: Tue Jun 11 14:09:23 2024 00:41:30.472 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10025msec) 00:41:30.472 slat (nsec): min=8226, max=33968, avg=10106.73, stdev=2924.39 00:41:30.472 clat (usec): min=40846, max=43227, avg=41738.66, stdev=480.47 00:41:30.472 lat (usec): min=40854, max=43256, avg=41748.77, stdev=480.69 00:41:30.472 clat percentiles (usec): 00:41:30.472 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:30.472 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:41:30.472 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:41:30.472 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:41:30.472 | 99.99th=[43254] 
00:41:30.472 bw ( KiB/s): min= 352, max= 384, per=33.61%, avg=382.40, stdev= 7.16, samples=20 00:41:30.472 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:41:30.472 lat (msec) : 50=100.00% 00:41:30.472 cpu : usr=92.67%, sys=7.04%, ctx=14, majf=0, minf=130 00:41:30.472 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.472 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.472 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.472 filename1: (groupid=0, jobs=1): err= 0: pid=1695438: Tue Jun 11 14:09:23 2024 00:41:30.472 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10037msec) 00:41:30.472 slat (nsec): min=8206, max=34896, avg=9324.84, stdev=2317.23 00:41:30.472 clat (usec): min=553, max=42421, avg=21191.73, stdev=20340.08 00:41:30.472 lat (usec): min=561, max=42430, avg=21201.05, stdev=20339.37 00:41:30.472 clat percentiles (usec): 00:41:30.472 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 611], 00:41:30.472 | 30.00th=[ 840], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:41:30.472 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:41:30.472 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:30.472 | 99.99th=[42206] 00:41:30.472 bw ( KiB/s): min= 704, max= 768, per=66.43%, avg=755.20, stdev=24.13, samples=20 00:41:30.472 iops : min= 176, max= 192, avg=188.80, stdev= 6.03, samples=20 00:41:30.472 lat (usec) : 750=27.48%, 1000=21.56% 00:41:30.472 lat (msec) : 2=0.63%, 50=50.32% 00:41:30.472 cpu : usr=93.02%, sys=6.70%, ctx=12, majf=0, minf=107 00:41:30.472 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.472 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.472 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:30.472 00:41:30.473 Run status group 0 (all jobs): 00:41:30.473 READ: bw=1137KiB/s (1164kB/s), 383KiB/s-754KiB/s (392kB/s-772kB/s), io=11.1MiB (11.7MB), run=10025-10037msec 00:41:30.731 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 00:41:30.732 real 0m11.620s 00:41:30.732 user 0m29.721s 00:41:30.732 sys 0m1.806s 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 ************************************ 00:41:30.732 END TEST fio_dif_1_multi_subsystems 00:41:30.732 ************************************ 00:41:30.732 14:09:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:30.732 14:09:23 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:30.732 14:09:23 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 ************************************ 00:41:30.732 START TEST fio_dif_rand_params 00:41:30.732 ************************************ 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:30.732 14:09:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 bdev_null0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:30.732 [2024-06-11 14:09:23.624204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:30.732 
14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:30.732 { 00:41:30.732 "params": { 00:41:30.732 "name": "Nvme$subsystem", 00:41:30.732 "trtype": "$TEST_TRANSPORT", 00:41:30.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:30.732 "adrfam": "ipv4", 00:41:30.732 "trsvcid": "$NVMF_PORT", 00:41:30.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:30.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:30.732 "hdgst": ${hdgst:-false}, 00:41:30.732 "ddgst": ${ddgst:-false} 00:41:30.732 }, 00:41:30.732 "method": "bdev_nvme_attach_controller" 00:41:30.732 } 00:41:30.732 EOF 00:41:30.732 )") 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:30.732 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
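
The JSON being assembled above is handed straight to fio together with a generated job file. A condensed sketch of what fio_bdev runs for this NULL_DIF=3 pass: block size, job count, depth, and runtime are the values set at target/dif.sh@103 earlier; the job-file body is illustrative (the real one comes from gen_fio_conf), Nvme0n1 is the bdev name bdev_nvme_attach_controller produces for controller Nvme0, and bdev.json stands in for the config the harness actually feeds over /dev/fd/62 (it is the bdev_nvme config printed just below):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cat > dif.fio <<'EOF'
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3
    runtime=5
    EOF
    # The LD_PRELOADed plugin provides the spdk_bdev ioengine;
    # --spdk_json_conf points fio at the attached NVMe-oF bdevs.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio
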
00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:30.991 "params": { 00:41:30.991 "name": "Nvme0", 00:41:30.991 "trtype": "tcp", 00:41:30.991 "traddr": "10.0.0.2", 00:41:30.991 "adrfam": "ipv4", 00:41:30.991 "trsvcid": "4420", 00:41:30.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:30.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:30.991 "hdgst": false, 00:41:30.991 "ddgst": false 00:41:30.991 }, 00:41:30.991 "method": "bdev_nvme_attach_controller" 00:41:30.991 }' 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:30.991 14:09:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.249 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:31.249 ... 
00:41:31.249 fio-3.35 00:41:31.249 Starting 3 threads 00:41:31.249 EAL: No free 2048 kB hugepages reported on node 1 00:41:37.817 00:41:37.817 filename0: (groupid=0, jobs=1): err= 0: pid=1697356: Tue Jun 11 14:09:29 2024 00:41:37.817 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(133MiB/5004msec) 00:41:37.817 slat (usec): min=8, max=148, avg=14.38, stdev= 6.60 00:41:37.817 clat (usec): min=4550, max=57128, avg=14127.35, stdev=13201.70 00:41:37.817 lat (usec): min=4558, max=57147, avg=14141.73, stdev=13202.23 00:41:37.817 clat percentiles (usec): 00:41:37.817 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 7439], 00:41:37.817 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10814], 00:41:37.817 | 70.00th=[12256], 80.00th=[13566], 90.00th=[48497], 95.00th=[52691], 00:41:37.817 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56361], 99.95th=[56886], 00:41:37.817 | 99.99th=[56886] 00:41:37.817 bw ( KiB/s): min=21504, max=36096, per=32.99%, avg=26339.56, stdev=4301.72, samples=9 00:41:37.817 iops : min= 168, max= 282, avg=205.78, stdev=33.61, samples=9 00:41:37.817 lat (msec) : 10=52.87%, 20=36.95%, 50=1.04%, 100=9.14% 00:41:37.817 cpu : usr=90.47%, sys=8.26%, ctx=287, majf=0, minf=143 00:41:37.817 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.817 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.817 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.817 filename0: (groupid=0, jobs=1): err= 0: pid=1697357: Tue Jun 11 14:09:29 2024 00:41:37.817 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(125MiB/5046msec) 00:41:37.817 slat (nsec): min=8291, max=29955, avg=13466.16, stdev=4641.58 00:41:37.817 clat (usec): min=5321, max=95658, avg=15093.53, stdev=14305.90 00:41:37.817 lat (usec): min=5330, max=95667, avg=15106.99, stdev=14306.16 00:41:37.817 clat percentiles (usec): 00:41:37.817 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 8291], 00:41:37.817 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11338], 00:41:37.817 | 70.00th=[12387], 80.00th=[13566], 90.00th=[50594], 95.00th=[52691], 00:41:37.817 | 99.00th=[55837], 99.50th=[56886], 99.90th=[95945], 99.95th=[95945], 00:41:37.817 | 99.99th=[95945] 00:41:37.817 bw ( KiB/s): min=20736, max=36352, per=31.97%, avg=25523.20, stdev=4620.70, samples=10 00:41:37.817 iops : min= 162, max= 284, avg=199.40, stdev=36.10, samples=10 00:41:37.817 lat (msec) : 10=48.95%, 20=39.44%, 50=1.50%, 100=10.11% 00:41:37.817 cpu : usr=93.18%, sys=6.42%, ctx=6, majf=0, minf=76 00:41:37.817 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.817 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.817 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.817 filename0: (groupid=0, jobs=1): err= 0: pid=1697358: Tue Jun 11 14:09:29 2024 00:41:37.817 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(136MiB/5038msec) 00:41:37.817 slat (nsec): min=6404, max=38935, avg=15329.48, stdev=6875.95 00:41:37.817 clat (usec): min=4364, max=95198, avg=13883.87, stdev=14055.61 00:41:37.817 lat (usec): min=4374, max=95207, avg=13899.20, stdev=14055.85 00:41:37.817 clat percentiles (usec): 00:41:37.817 | 
1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 7242], 00:41:37.817 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10290], 00:41:37.817 | 70.00th=[11207], 80.00th=[12125], 90.00th=[49021], 95.00th=[51119], 00:41:37.817 | 99.00th=[54264], 99.50th=[55313], 99.90th=[94897], 99.95th=[94897], 00:41:37.817 | 99.99th=[94897] 00:41:37.817 bw ( KiB/s): min=17664, max=35072, per=34.77%, avg=27756.50, stdev=6266.03, samples=10 00:41:37.817 iops : min= 138, max= 274, avg=216.80, stdev=48.93, samples=10 00:41:37.817 lat (msec) : 10=57.77%, 20=30.91%, 50=3.68%, 100=7.64% 00:41:37.817 cpu : usr=92.97%, sys=6.65%, ctx=10, majf=0, minf=79 00:41:37.817 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.817 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.817 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:37.817 00:41:37.817 Run status group 0 (all jobs): 00:41:37.817 READ: bw=78.0MiB/s (81.7MB/s), 24.7MiB/s-27.0MiB/s (25.9MB/s-28.3MB/s), io=393MiB (412MB), run=5004-5046msec 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:37.817 14:09:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.817 bdev_null0 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.817 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 [2024-06-11 14:09:29.852729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 bdev_null1 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
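
Note that this second fio_dif_rand_params pass (NULL_DIF=2, set at target/dif.sh@109 above, with 4k blocks, 8 jobs, and iodepth 16) switches the null bdevs from DIF type 3 to type 2. An annotated form of the bdev_null_create call being repeated above, with values copied exactly from the rpc_cmd lines; the 16-byte metadata region is what carries the protection information:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    #                                       name       MiB block --md-size --dif-type
    "$SPDK/scripts/rpc.py" bdev_null_create bdev_null1 64  512   --md-size 16 --dif-type 2
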
00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 bdev_null2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:37.818 { 00:41:37.818 "params": { 00:41:37.818 "name": "Nvme$subsystem", 00:41:37.818 "trtype": "$TEST_TRANSPORT", 00:41:37.818 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:41:37.818 "adrfam": "ipv4", 00:41:37.818 "trsvcid": "$NVMF_PORT", 00:41:37.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.818 "hdgst": ${hdgst:-false}, 00:41:37.818 "ddgst": ${ddgst:-false} 00:41:37.818 }, 00:41:37.818 "method": "bdev_nvme_attach_controller" 00:41:37.818 } 00:41:37.818 EOF 00:41:37.818 )") 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:37.818 { 00:41:37.818 "params": { 00:41:37.818 "name": "Nvme$subsystem", 00:41:37.818 "trtype": "$TEST_TRANSPORT", 00:41:37.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.818 "adrfam": "ipv4", 00:41:37.818 "trsvcid": "$NVMF_PORT", 00:41:37.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.818 "hdgst": ${hdgst:-false}, 00:41:37.818 "ddgst": ${ddgst:-false} 00:41:37.818 }, 00:41:37.818 "method": "bdev_nvme_attach_controller" 00:41:37.818 } 00:41:37.818 EOF 00:41:37.818 )") 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:37.818 { 00:41:37.818 "params": { 00:41:37.818 "name": "Nvme$subsystem", 00:41:37.818 "trtype": "$TEST_TRANSPORT", 00:41:37.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:37.818 "adrfam": "ipv4", 00:41:37.818 "trsvcid": "$NVMF_PORT", 00:41:37.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:37.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:37.818 "hdgst": ${hdgst:-false}, 00:41:37.818 "ddgst": ${ddgst:-false} 00:41:37.818 }, 00:41:37.818 "method": "bdev_nvme_attach_controller" 00:41:37.818 } 00:41:37.818 EOF 00:41:37.818 )") 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:37.818 14:09:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:37.818 "params": { 00:41:37.818 "name": "Nvme0", 00:41:37.818 "trtype": "tcp", 00:41:37.818 "traddr": "10.0.0.2", 00:41:37.818 "adrfam": "ipv4", 00:41:37.818 "trsvcid": "4420", 00:41:37.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:37.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:37.818 "hdgst": false, 00:41:37.818 "ddgst": false 00:41:37.818 }, 00:41:37.818 "method": "bdev_nvme_attach_controller" 00:41:37.818 },{ 00:41:37.818 "params": { 00:41:37.818 "name": "Nvme1", 00:41:37.818 "trtype": "tcp", 00:41:37.818 "traddr": "10.0.0.2", 00:41:37.819 "adrfam": "ipv4", 00:41:37.819 "trsvcid": "4420", 00:41:37.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:37.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:37.819 "hdgst": false, 00:41:37.819 "ddgst": false 00:41:37.819 }, 00:41:37.819 "method": "bdev_nvme_attach_controller" 00:41:37.819 },{ 00:41:37.819 "params": { 00:41:37.819 "name": "Nvme2", 00:41:37.819 "trtype": "tcp", 00:41:37.819 "traddr": "10.0.0.2", 00:41:37.819 "adrfam": "ipv4", 00:41:37.819 "trsvcid": "4420", 00:41:37.819 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:37.819 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:37.819 "hdgst": false, 00:41:37.819 "ddgst": false 00:41:37.819 }, 00:41:37.819 "method": "bdev_nvme_attach_controller" 00:41:37.819 }' 00:41:37.819 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:37.819 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:37.819 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:37.819 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:37.819 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:37.819 14:09:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:37.819 14:09:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:41:37.819 14:09:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:37.819 14:09:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:37.819 14:09:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:37.819 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.819 ... 00:41:37.819 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.819 ... 00:41:37.819 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:37.819 ... 00:41:37.819 fio-3.35 00:41:37.819 Starting 24 threads 00:41:37.819 EAL: No free 2048 kB hugepages reported on node 1 00:41:50.035 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698525: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:41:50.035 slat (nsec): min=8397, max=38979, avg=12379.74, stdev=4074.49 00:41:50.035 clat (usec): min=21853, max=72788, avg=33472.80, stdev=3490.30 00:41:50.035 lat (usec): min=21863, max=72826, avg=33485.18, stdev=3490.79 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[22152], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:50.035 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:50.035 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:41:50.035 | 99.00th=[44827], 99.50th=[44827], 99.90th=[72877], 99.95th=[72877], 00:41:50.035 | 99.99th=[72877] 00:41:50.035 bw ( KiB/s): min= 1667, max= 1952, per=4.15%, avg=1906.55, stdev=57.68, samples=20 00:41:50.035 iops : min= 416, max= 488, avg=476.60, stdev=14.58, samples=20 00:41:50.035 lat (msec) : 50=99.66%, 100=0.34% 00:41:50.035 cpu : usr=97.34%, sys=2.26%, ctx=14, majf=0, minf=44 00:41:50.035 IO depths : 1=3.9%, 2=9.3%, 4=24.1%, 8=54.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:41:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698526: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10015msec) 00:41:50.035 slat (nsec): min=9064, max=46160, avg=21966.01, stdev=5519.44 00:41:50.035 clat (usec): min=27104, max=61578, avg=33295.77, stdev=1680.26 00:41:50.035 lat (usec): min=27121, max=61598, avg=33317.73, stdev=1680.24 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.035 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.035 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.035 | 99.00th=[33817], 99.50th=[35390], 99.90th=[61604], 99.95th=[61604], 00:41:50.035 | 99.99th=[61604] 00:41:50.035 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.35, stdev=38.94, samples=20 00:41:50.035 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.035 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.035 
cpu : usr=97.14%, sys=2.47%, ctx=15, majf=0, minf=37 00:41:50.035 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698527: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:41:50.035 slat (nsec): min=6299, max=40434, avg=18214.43, stdev=4760.31 00:41:50.035 clat (usec): min=20343, max=51627, avg=33287.60, stdev=1343.38 00:41:50.035 lat (usec): min=20352, max=51644, avg=33305.81, stdev=1343.08 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:50.035 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.035 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.035 | 99.00th=[33817], 99.50th=[35390], 99.90th=[51643], 99.95th=[51643], 00:41:50.035 | 99.99th=[51643] 00:41:50.035 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1906.53, stdev=40.36, samples=19 00:41:50.035 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:41:50.035 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.035 cpu : usr=97.37%, sys=2.24%, ctx=14, majf=0, minf=43 00:41:50.035 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698528: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10012msec) 00:41:50.035 slat (nsec): min=6484, max=42995, avg=18697.48, stdev=4835.13 00:41:50.035 clat (usec): min=20369, max=75391, avg=33310.99, stdev=2059.61 00:41:50.035 lat (usec): min=20384, max=75408, avg=33329.69, stdev=2059.20 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:50.035 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.035 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.035 | 99.00th=[34341], 99.50th=[44827], 99.90th=[57934], 99.95th=[74974], 00:41:50.035 | 99.99th=[74974] 00:41:50.035 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.20, stdev=39.40, samples=20 00:41:50.035 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.035 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.035 cpu : usr=97.32%, sys=2.30%, ctx=14, majf=0, minf=52 00:41:50.035 IO depths : 1=6.2%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698530: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=482, BW=1932KiB/s 
(1978kB/s)(18.9MiB/10006msec) 00:41:50.035 slat (nsec): min=7558, max=49416, avg=22744.95, stdev=6140.17 00:41:50.035 clat (usec): min=5495, max=35306, avg=32932.29, stdev=2473.68 00:41:50.035 lat (usec): min=5506, max=35321, avg=32955.03, stdev=2474.32 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[23725], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:50.035 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.035 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.035 | 99.00th=[33817], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:41:50.035 | 99.99th=[35390] 00:41:50.035 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=1933.47, stdev=58.73, samples=19 00:41:50.035 iops : min= 480, max= 544, avg=483.37, stdev=14.68, samples=19 00:41:50.035 lat (msec) : 10=0.60%, 20=0.39%, 50=99.01% 00:41:50.035 cpu : usr=97.25%, sys=2.34%, ctx=16, majf=0, minf=47 00:41:50.035 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698531: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=484, BW=1937KiB/s (1983kB/s)(18.9MiB/10012msec) 00:41:50.035 slat (nsec): min=8703, max=48074, avg=15500.17, stdev=5649.75 00:41:50.035 clat (usec): min=3207, max=35286, avg=32915.50, stdev=2989.54 00:41:50.035 lat (usec): min=3216, max=35301, avg=32931.00, stdev=2989.41 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[ 8848], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.035 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:50.035 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:41:50.035 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:41:50.035 | 99.99th=[35390] 00:41:50.035 bw ( KiB/s): min= 1792, max= 2304, per=4.21%, avg=1932.80, stdev=91.93, samples=20 00:41:50.035 iops : min= 448, max= 576, avg=483.20, stdev=22.98, samples=20 00:41:50.035 lat (msec) : 4=0.33%, 10=0.80%, 20=0.14%, 50=98.72% 00:41:50.035 cpu : usr=97.42%, sys=2.18%, ctx=15, majf=0, minf=52 00:41:50.035 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.035 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.035 filename0: (groupid=0, jobs=1): err= 0: pid=1698532: Tue Jun 11 14:09:41 2024 00:41:50.035 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:41:50.035 slat (usec): min=8, max=101, avg=35.64, stdev=17.51 00:41:50.035 clat (usec): min=24698, max=61159, avg=33149.81, stdev=1294.42 00:41:50.035 lat (usec): min=24708, max=61182, avg=33185.45, stdev=1293.17 00:41:50.035 clat percentiles (usec): 00:41:50.035 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:50.035 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:50.035 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.035 | 99.00th=[33817], 
99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:41:50.035 | 99.99th=[61080] 00:41:50.035 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.35, stdev=38.94, samples=20 00:41:50.036 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.036 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.036 cpu : usr=97.48%, sys=1.96%, ctx=49, majf=0, minf=72 00:41:50.036 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename0: (groupid=0, jobs=1): err= 0: pid=1698533: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10009msec) 00:41:50.036 slat (nsec): min=9001, max=43845, avg=19021.88, stdev=5656.70 00:41:50.036 clat (usec): min=8582, max=35287, avg=32985.01, stdev=2383.37 00:41:50.036 lat (usec): min=8610, max=35303, avg=33004.03, stdev=2383.43 00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.036 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:50.036 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.036 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:41:50.036 | 99.99th=[35390] 00:41:50.036 bw ( KiB/s): min= 1792, max= 2176, per=4.20%, avg=1926.40, stdev=65.33, samples=20 00:41:50.036 iops : min= 448, max= 544, avg=481.60, stdev=16.33, samples=20 00:41:50.036 lat (msec) : 10=0.66%, 20=0.33%, 50=99.01% 00:41:50.036 cpu : usr=97.22%, sys=2.39%, ctx=7, majf=0, minf=39 00:41:50.036 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename1: (groupid=0, jobs=1): err= 0: pid=1698534: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10010msec) 00:41:50.036 slat (nsec): min=6409, max=51993, avg=12593.65, stdev=5876.71 00:41:50.036 clat (usec): min=26165, max=67629, avg=33371.35, stdev=1343.27 00:41:50.036 lat (usec): min=26175, max=67643, avg=33383.94, stdev=1343.02 00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.036 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:41:50.036 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:41:50.036 | 99.00th=[34341], 99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:41:50.036 | 99.99th=[67634] 00:41:50.036 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.20, stdev=39.40, samples=20 00:41:50.036 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.036 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.036 cpu : usr=97.52%, sys=2.09%, ctx=18, majf=0, minf=66 00:41:50.036 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename1: (groupid=0, jobs=1): err= 0: pid=1698535: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:41:50.036 slat (nsec): min=4413, max=76812, avg=39973.23, stdev=13898.42 00:41:50.036 clat (usec): min=16655, max=62814, avg=33109.22, stdev=1990.21 00:41:50.036 lat (usec): min=16675, max=62827, avg=33149.19, stdev=1988.95 00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:50.036 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:50.036 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33424], 00:41:50.036 | 99.00th=[33817], 99.50th=[34866], 99.90th=[62653], 99.95th=[62653], 00:41:50.036 | 99.99th=[62653] 00:41:50.036 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1906.53, stdev=35.18, samples=19 00:41:50.036 iops : min= 448, max= 480, avg=476.53, stdev= 8.89, samples=19 00:41:50.036 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:41:50.036 cpu : usr=97.23%, sys=2.36%, ctx=14, majf=0, minf=36 00:41:50.036 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename1: (groupid=0, jobs=1): err= 0: pid=1698536: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:41:50.036 slat (nsec): min=6390, max=79140, avg=40300.89, stdev=14228.99 00:41:50.036 clat (usec): min=16632, max=67542, avg=33128.00, stdev=2406.08 00:41:50.036 lat (usec): min=16641, max=67558, avg=33168.30, stdev=2405.15 00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[26608], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:50.036 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:50.036 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.036 | 99.00th=[39584], 99.50th=[40633], 99.90th=[67634], 99.95th=[67634], 00:41:50.036 | 99.99th=[67634] 00:41:50.036 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1906.53, stdev=58.73, samples=19 00:41:50.036 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:41:50.036 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:41:50.036 cpu : usr=97.38%, sys=2.23%, ctx=7, majf=0, minf=39 00:41:50.036 IO depths : 1=5.4%, 2=11.1%, 4=24.2%, 8=52.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename1: (groupid=0, jobs=1): err= 0: pid=1698537: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=477, BW=1911KiB/s (1956kB/s)(18.7MiB/10016msec) 00:41:50.036 slat (nsec): min=8528, max=41669, avg=18443.85, stdev=4720.79 00:41:50.036 clat (usec): min=20269, max=64646, avg=33334.91, stdev=1965.07 00:41:50.036 lat (usec): min=20296, max=64671, avg=33353.36, stdev=1964.76 
00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:50.036 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:50.036 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.036 | 99.00th=[33817], 99.50th=[35390], 99.90th=[64750], 99.95th=[64750], 00:41:50.036 | 99.99th=[64750] 00:41:50.036 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1905.85, stdev=57.24, samples=20 00:41:50.036 iops : min= 416, max= 480, avg=476.45, stdev=14.31, samples=20 00:41:50.036 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.036 cpu : usr=97.23%, sys=2.37%, ctx=10, majf=0, minf=35 00:41:50.036 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename1: (groupid=0, jobs=1): err= 0: pid=1698538: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10015msec) 00:41:50.036 slat (nsec): min=8071, max=42534, avg=22039.35, stdev=5565.49 00:41:50.036 clat (usec): min=18608, max=61526, avg=33300.73, stdev=2357.45 00:41:50.036 lat (usec): min=18629, max=61541, avg=33322.77, stdev=2357.56 00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[27395], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.036 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.036 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.036 | 99.00th=[36439], 99.50th=[47973], 99.90th=[61604], 99.95th=[61604], 00:41:50.036 | 99.99th=[61604] 00:41:50.036 bw ( KiB/s): min= 1792, max= 1936, per=4.15%, avg=1907.35, stdev=39.28, samples=20 00:41:50.036 iops : min= 448, max= 484, avg=476.80, stdev= 9.93, samples=20 00:41:50.036 lat (msec) : 20=0.63%, 50=99.00%, 100=0.38% 00:41:50.036 cpu : usr=97.08%, sys=2.52%, ctx=14, majf=0, minf=44 00:41:50.036 IO depths : 1=5.7%, 2=11.9%, 4=24.7%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:50.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.036 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.036 filename1: (groupid=0, jobs=1): err= 0: pid=1698539: Tue Jun 11 14:09:41 2024 00:41:50.036 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:41:50.036 slat (nsec): min=4749, max=80197, avg=33067.14, stdev=15165.97 00:41:50.036 clat (usec): min=28434, max=53496, avg=33232.22, stdev=1223.02 00:41:50.036 lat (usec): min=28479, max=53509, avg=33265.29, stdev=1220.45 00:41:50.036 clat percentiles (usec): 00:41:50.036 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:41:50.036 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.036 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.036 | 99.00th=[34341], 99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:41:50.036 | 99.99th=[53740] 00:41:50.036 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.20, stdev=39.40, samples=20 00:41:50.036 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.036 lat 
(msec) : 50=99.67%, 100=0.33% 00:41:50.037 cpu : usr=96.97%, sys=2.63%, ctx=13, majf=0, minf=42 00:41:50.037 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename1: (groupid=0, jobs=1): err= 0: pid=1698540: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:41:50.037 slat (nsec): min=5989, max=44788, avg=20126.09, stdev=6924.57 00:41:50.037 clat (usec): min=13171, max=80746, avg=33258.87, stdev=2487.20 00:41:50.037 lat (usec): min=13181, max=80763, avg=33279.00, stdev=2487.04 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.037 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.037 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.037 | 99.00th=[33817], 99.50th=[35390], 99.90th=[67634], 99.95th=[80217], 00:41:50.037 | 99.99th=[81265] 00:41:50.037 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1906.53, stdev=58.73, samples=19 00:41:50.037 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:41:50.037 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:41:50.037 cpu : usr=97.34%, sys=2.26%, ctx=10, majf=0, minf=46 00:41:50.037 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename1: (groupid=0, jobs=1): err= 0: pid=1698541: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=477, BW=1909KiB/s (1955kB/s)(18.6MiB/10003msec) 00:41:50.037 slat (nsec): min=5650, max=62669, avg=16397.35, stdev=6257.53 00:41:50.037 clat (usec): min=8578, max=88324, avg=33466.06, stdev=2757.77 00:41:50.037 lat (usec): min=8587, max=88339, avg=33482.46, stdev=2757.75 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:50.037 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:50.037 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:41:50.037 | 99.00th=[36963], 99.50th=[46924], 99.90th=[67634], 99.95th=[88605], 00:41:50.037 | 99.99th=[88605] 00:41:50.037 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1902.32, stdev=59.12, samples=19 00:41:50.037 iops : min= 416, max= 480, avg=475.58, stdev=14.78, samples=19 00:41:50.037 lat (msec) : 10=0.13%, 20=0.21%, 50=99.33%, 100=0.34% 00:41:50.037 cpu : usr=97.03%, sys=2.57%, ctx=13, majf=0, minf=34 00:41:50.037 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=80.9%, 16=18.6%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=89.5%, 8=10.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename2: (groupid=0, jobs=1): err= 0: pid=1698542: Tue 
Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10003msec) 00:41:50.037 slat (nsec): min=5827, max=56168, avg=17090.77, stdev=7039.20 00:41:50.037 clat (usec): min=8135, max=67643, avg=33564.62, stdev=3013.68 00:41:50.037 lat (usec): min=8144, max=67659, avg=33581.71, stdev=3013.36 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[25822], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:41:50.037 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:41:50.037 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:41:50.037 | 99.00th=[47973], 99.50th=[52691], 99.90th=[67634], 99.95th=[67634], 00:41:50.037 | 99.99th=[67634] 00:41:50.037 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1898.11, stdev=60.62, samples=19 00:41:50.037 iops : min= 416, max= 480, avg=474.53, stdev=15.16, samples=19 00:41:50.037 lat (msec) : 10=0.08%, 20=0.50%, 50=98.91%, 100=0.50% 00:41:50.037 cpu : usr=97.24%, sys=2.36%, ctx=12, majf=0, minf=80 00:41:50.037 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=78.0%, 16=17.8%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=90.1%, 8=9.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename2: (groupid=0, jobs=1): err= 0: pid=1698543: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:41:50.037 slat (nsec): min=4283, max=78746, avg=39449.14, stdev=14197.94 00:41:50.037 clat (usec): min=16696, max=61737, avg=33116.74, stdev=1936.74 00:41:50.037 lat (usec): min=16716, max=61749, avg=33156.18, stdev=1935.46 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:50.037 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:50.037 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.037 | 99.00th=[33817], 99.50th=[34866], 99.90th=[61604], 99.95th=[61604], 00:41:50.037 | 99.99th=[61604] 00:41:50.037 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1906.68, stdev=39.89, samples=19 00:41:50.037 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:41:50.037 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:41:50.037 cpu : usr=96.83%, sys=2.77%, ctx=10, majf=0, minf=52 00:41:50.037 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename2: (groupid=0, jobs=1): err= 0: pid=1698545: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10004msec) 00:41:50.037 slat (nsec): min=6546, max=45127, avg=21710.54, stdev=5618.88 00:41:50.037 clat (usec): min=10371, max=35350, avg=33042.43, stdev=1756.61 00:41:50.037 lat (usec): min=10389, max=35364, avg=33064.14, stdev=1757.62 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[25297], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:41:50.037 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.037 | 70.00th=[33424], 80.00th=[33424], 
90.00th=[33424], 95.00th=[33424], 00:41:50.037 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:41:50.037 | 99.99th=[35390] 00:41:50.037 bw ( KiB/s): min= 1920, max= 2048, per=4.20%, avg=1926.74, stdev=29.37, samples=19 00:41:50.037 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:41:50.037 lat (msec) : 20=0.66%, 50=99.34% 00:41:50.037 cpu : usr=96.95%, sys=2.64%, ctx=10, majf=0, minf=40 00:41:50.037 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename2: (groupid=0, jobs=1): err= 0: pid=1698546: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10015msec) 00:41:50.037 slat (nsec): min=8322, max=80460, avg=23838.41, stdev=12947.37 00:41:50.037 clat (usec): min=25139, max=68277, avg=33300.25, stdev=1617.81 00:41:50.037 lat (usec): min=25149, max=68292, avg=33324.09, stdev=1616.95 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[28705], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:41:50.037 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:41:50.037 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:41:50.037 | 99.00th=[39060], 99.50th=[39584], 99.90th=[53216], 99.95th=[67634], 00:41:50.037 | 99.99th=[68682] 00:41:50.037 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.35, stdev=38.94, samples=20 00:41:50.037 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.037 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.037 cpu : usr=96.91%, sys=2.68%, ctx=9, majf=0, minf=54 00:41:50.037 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename2: (groupid=0, jobs=1): err= 0: pid=1698547: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:41:50.037 slat (nsec): min=8856, max=76781, avg=37242.80, stdev=15021.37 00:41:50.037 clat (usec): min=25166, max=68887, avg=33190.36, stdev=1379.68 00:41:50.037 lat (usec): min=25175, max=68904, avg=33227.60, stdev=1377.48 00:41:50.037 clat percentiles (usec): 00:41:50.037 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:50.037 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.037 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.037 | 99.00th=[33817], 99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:41:50.037 | 99.99th=[68682] 00:41:50.037 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.20, stdev=39.40, samples=20 00:41:50.037 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.037 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.037 cpu : usr=97.22%, sys=2.37%, ctx=11, majf=0, minf=39 00:41:50.037 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:50.037 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.037 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.037 filename2: (groupid=0, jobs=1): err= 0: pid=1698548: Tue Jun 11 14:09:41 2024 00:41:50.037 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10006msec) 00:41:50.037 slat (nsec): min=4226, max=79615, avg=40049.08, stdev=14130.24 00:41:50.037 clat (usec): min=16572, max=63854, avg=33116.38, stdev=2043.91 00:41:50.038 lat (usec): min=16588, max=63867, avg=33156.43, stdev=2042.49 00:41:50.038 clat percentiles (usec): 00:41:50.038 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:41:50.038 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:41:50.038 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.038 | 99.00th=[33817], 99.50th=[34866], 99.90th=[63701], 99.95th=[63701], 00:41:50.038 | 99.99th=[63701] 00:41:50.038 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1906.68, stdev=39.89, samples=19 00:41:50.038 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:41:50.038 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:41:50.038 cpu : usr=97.17%, sys=2.42%, ctx=12, majf=0, minf=48 00:41:50.038 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.038 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.038 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.038 filename2: (groupid=0, jobs=1): err= 0: pid=1698549: Tue Jun 11 14:09:41 2024 00:41:50.038 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10015msec) 00:41:50.038 slat (nsec): min=9360, max=91483, avg=35089.80, stdev=15622.55 00:41:50.038 clat (usec): min=26323, max=53460, avg=33209.96, stdev=1259.67 00:41:50.038 lat (usec): min=26333, max=53475, avg=33245.05, stdev=1257.96 00:41:50.038 clat percentiles (usec): 00:41:50.038 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:50.038 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.038 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33424], 00:41:50.038 | 99.00th=[34341], 99.50th=[34866], 99.90th=[53216], 99.95th=[53216], 00:41:50.038 | 99.99th=[53216] 00:41:50.038 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1907.35, stdev=38.94, samples=20 00:41:50.038 iops : min= 448, max= 480, avg=476.80, stdev= 9.85, samples=20 00:41:50.038 lat (msec) : 50=99.67%, 100=0.33% 00:41:50.038 cpu : usr=96.88%, sys=2.53%, ctx=36, majf=0, minf=51 00:41:50.038 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:41:50.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.038 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.038 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.038 filename2: (groupid=0, jobs=1): err= 0: pid=1698550: Tue Jun 11 14:09:41 2024 00:41:50.038 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:41:50.038 slat (nsec): min=6174, max=84640, avg=37516.75, stdev=15235.29 00:41:50.038 clat (usec): min=16577, max=69062, avg=33187.86, stdev=2152.94 00:41:50.038 lat (usec): 
min=16592, max=69081, avg=33225.38, stdev=2151.27 00:41:50.038 clat percentiles (usec): 00:41:50.038 | 1.00th=[28705], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:41:50.038 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:41:50.038 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:41:50.038 | 99.00th=[35390], 99.50th=[50070], 99.90th=[56361], 99.95th=[68682], 00:41:50.038 | 99.99th=[68682] 00:41:50.038 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1907.35, stdev=56.57, samples=20 00:41:50.038 iops : min= 416, max= 480, avg=476.80, stdev=14.31, samples=20 00:41:50.038 lat (msec) : 20=0.33%, 50=99.25%, 100=0.42% 00:41:50.038 cpu : usr=97.43%, sys=2.16%, ctx=12, majf=0, minf=38 00:41:50.038 IO depths : 1=5.3%, 2=11.5%, 4=24.7%, 8=51.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:41:50.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.038 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.038 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.038 00:41:50.038 Run status group 0 (all jobs): 00:41:50.038 READ: bw=44.8MiB/s (47.0MB/s), 1903KiB/s-1937KiB/s (1948kB/s-1983kB/s), io=449MiB (471MB), run=10003-10016msec 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- 
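Note: the destroy_subsystems 0 1 2 sequence starting here tears down, in reverse order, exactly what create_subsystems built before the run. Condensed into SPDK's standalone rpc.py client for reference (the RPC names and arguments are verbatim from this trace; the scripts/rpc.py invocation style and default target socket are assumed rather than shown in the log):

# Setup, as performed per subsystem N (shown for N=0):
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# Teardown, as traced here (reverse order):
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0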
# xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 bdev_null0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 [2024-06-11 14:09:41.458122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 bdev_null1 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.038 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:50.039 14:09:41 
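Note: with both subsystems listening again, the trace below regenerates the two inputs fio needs: the bdev JSON config on /dev/fd/62 (gen_nvmf_target_json) and the job file on /dev/fd/61 (gen_fio_conf). The job file itself never appears verbatim in the log; a minimal hand-written equivalent, consistent with the target/dif.sh@115 parameters set above and the per-file headers fio prints below, would look roughly like this sketch. The exact option set emitted by gen_fio_conf, the thread=1 setting, and the Nvme0n1/Nvme1n1 bdev names are assumptions, not literal log content:

# Hypothetical stand-in for the generated job file (illustrative only):
cat <<'FIO' > dif.fio
[global]
; SPDK's fio bdev plugin is normally run in thread mode
thread=1
ioengine=spdk_bdev
rw=randread
; read,write,trim block sizes -- the (R)/(W)/(T) values fio echoes back
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
[filename0]
; namespace bdev of the "Nvme0" controller from the JSON config
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO

Two file sections times numjobs=2 also accounts for the "Starting 4 threads" line further down.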
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:50.039 { 00:41:50.039 "params": { 00:41:50.039 "name": "Nvme$subsystem", 00:41:50.039 "trtype": "$TEST_TRANSPORT", 00:41:50.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:50.039 "adrfam": "ipv4", 00:41:50.039 "trsvcid": "$NVMF_PORT", 00:41:50.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:50.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:50.039 "hdgst": ${hdgst:-false}, 00:41:50.039 "ddgst": ${ddgst:-false} 00:41:50.039 }, 00:41:50.039 "method": "bdev_nvme_attach_controller" 00:41:50.039 } 00:41:50.039 EOF 00:41:50.039 )") 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:50.039 { 00:41:50.039 "params": { 00:41:50.039 "name": "Nvme$subsystem", 00:41:50.039 "trtype": "$TEST_TRANSPORT", 00:41:50.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:50.039 "adrfam": "ipv4", 00:41:50.039 "trsvcid": "$NVMF_PORT", 00:41:50.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:50.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:50.039 "hdgst": ${hdgst:-false}, 00:41:50.039 "ddgst": ${ddgst:-false} 00:41:50.039 }, 00:41:50.039 "method": 
"bdev_nvme_attach_controller" 00:41:50.039 } 00:41:50.039 EOF 00:41:50.039 )") 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:50.039 "params": { 00:41:50.039 "name": "Nvme0", 00:41:50.039 "trtype": "tcp", 00:41:50.039 "traddr": "10.0.0.2", 00:41:50.039 "adrfam": "ipv4", 00:41:50.039 "trsvcid": "4420", 00:41:50.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:50.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:50.039 "hdgst": false, 00:41:50.039 "ddgst": false 00:41:50.039 }, 00:41:50.039 "method": "bdev_nvme_attach_controller" 00:41:50.039 },{ 00:41:50.039 "params": { 00:41:50.039 "name": "Nvme1", 00:41:50.039 "trtype": "tcp", 00:41:50.039 "traddr": "10.0.0.2", 00:41:50.039 "adrfam": "ipv4", 00:41:50.039 "trsvcid": "4420", 00:41:50.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:50.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:50.039 "hdgst": false, 00:41:50.039 "ddgst": false 00:41:50.039 }, 00:41:50.039 "method": "bdev_nvme_attach_controller" 00:41:50.039 }' 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:50.039 14:09:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.039 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:50.039 ... 00:41:50.039 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:50.039 ... 
00:41:50.039 fio-3.35 00:41:50.039 Starting 4 threads 00:41:50.039 EAL: No free 2048 kB hugepages reported on node 1 00:41:55.314 00:41:55.314 filename0: (groupid=0, jobs=1): err= 0: pid=1700478: Tue Jun 11 14:09:47 2024 00:41:55.314 read: IOPS=1989, BW=15.5MiB/s (16.3MB/s)(77.8MiB/5005msec) 00:41:55.314 slat (nsec): min=8246, max=31258, avg=11246.47, stdev=3108.32 00:41:55.314 clat (usec): min=1460, max=43437, avg=3989.52, stdev=1258.31 00:41:55.314 lat (usec): min=1470, max=43462, avg=4000.77, stdev=1258.19 00:41:55.314 clat percentiles (usec): 00:41:55.314 | 1.00th=[ 2835], 5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3621], 00:41:55.314 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3916], 60.00th=[ 3982], 00:41:55.314 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4555], 95.00th=[ 5407], 00:41:55.314 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6915], 99.95th=[43254], 00:41:55.314 | 99.99th=[43254] 00:41:55.314 bw ( KiB/s): min=15297, max=16432, per=25.04%, avg=15918.50, stdev=384.67, samples=10 00:41:55.314 iops : min= 1912, max= 2054, avg=1989.80, stdev=48.11, samples=10 00:41:55.314 lat (msec) : 2=0.04%, 4=70.74%, 10=29.14%, 50=0.08% 00:41:55.314 cpu : usr=92.93%, sys=6.69%, ctx=11, majf=0, minf=0 00:41:55.314 IO depths : 1=0.2%, 2=1.5%, 4=69.6%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.314 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.314 issued rwts: total=9956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.314 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:55.314 filename0: (groupid=0, jobs=1): err= 0: pid=1700479: Tue Jun 11 14:09:47 2024 00:41:55.314 read: IOPS=2039, BW=15.9MiB/s (16.7MB/s)(79.7MiB/5002msec) 00:41:55.314 slat (nsec): min=7021, max=33045, avg=11382.14, stdev=3297.43 00:41:55.314 clat (usec): min=807, max=6713, avg=3889.93, stdev=717.16 00:41:55.314 lat (usec): min=816, max=6721, avg=3901.31, stdev=716.81 00:41:55.314 clat percentiles (usec): 00:41:55.314 | 1.00th=[ 2073], 5.00th=[ 2868], 10.00th=[ 3163], 20.00th=[ 3458], 00:41:55.315 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3884], 60.00th=[ 3949], 00:41:55.315 | 70.00th=[ 3982], 80.00th=[ 4047], 90.00th=[ 4817], 95.00th=[ 5473], 00:41:55.315 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6521], 99.95th=[ 6652], 00:41:55.315 | 99.99th=[ 6718] 00:41:55.315 bw ( KiB/s): min=15328, max=17696, per=25.66%, avg=16315.20, stdev=850.43, samples=10 00:41:55.315 iops : min= 1916, max= 2212, avg=2039.40, stdev=106.30, samples=10 00:41:55.315 lat (usec) : 1000=0.01% 00:41:55.315 lat (msec) : 2=0.70%, 4=74.25%, 10=25.04% 00:41:55.315 cpu : usr=93.12%, sys=6.46%, ctx=10, majf=0, minf=9 00:41:55.315 IO depths : 1=0.1%, 2=3.3%, 4=68.4%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.315 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.315 issued rwts: total=10200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:55.315 filename1: (groupid=0, jobs=1): err= 0: pid=1700480: Tue Jun 11 14:09:47 2024 00:41:55.315 read: IOPS=1978, BW=15.5MiB/s (16.2MB/s)(77.3MiB/5002msec) 00:41:55.315 slat (nsec): min=7664, max=34187, avg=11112.68, stdev=3207.94 00:41:55.315 clat (usec): min=1416, max=45082, avg=4012.20, stdev=1272.67 00:41:55.315 lat (usec): min=1425, max=45102, avg=4023.31, stdev=1272.62 00:41:55.315 clat 
percentiles (usec): 00:41:55.315 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3720], 00:41:55.315 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 3982], 00:41:55.315 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 4948], 00:41:55.315 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 7439], 99.95th=[44827], 00:41:55.315 | 99.99th=[44827] 00:41:55.315 bw ( KiB/s): min=14332, max=16752, per=24.88%, avg=15822.00, stdev=649.91, samples=10 00:41:55.315 iops : min= 1791, max= 2094, avg=1977.70, stdev=81.37, samples=10 00:41:55.315 lat (msec) : 2=0.02%, 4=69.92%, 10=29.97%, 50=0.08% 00:41:55.315 cpu : usr=93.06%, sys=6.56%, ctx=9, majf=0, minf=9 00:41:55.315 IO depths : 1=0.1%, 2=1.3%, 4=71.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.315 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.315 issued rwts: total=9895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:55.315 filename1: (groupid=0, jobs=1): err= 0: pid=1700482: Tue Jun 11 14:09:47 2024 00:41:55.315 read: IOPS=1944, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5002msec) 00:41:55.315 slat (nsec): min=7008, max=31334, avg=11302.77, stdev=3187.89 00:41:55.315 clat (usec): min=1220, max=6946, avg=4083.60, stdev=672.83 00:41:55.315 lat (usec): min=1228, max=6960, avg=4094.90, stdev=672.54 00:41:55.315 clat percentiles (usec): 00:41:55.315 | 1.00th=[ 2442], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3687], 00:41:55.315 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:41:55.315 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 5080], 95.00th=[ 5669], 00:41:55.315 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 6652], 99.95th=[ 6718], 00:41:55.315 | 99.99th=[ 6915] 00:41:55.315 bw ( KiB/s): min=14592, max=16432, per=24.46%, avg=15552.00, stdev=616.13, samples=10 00:41:55.315 iops : min= 1824, max= 2054, avg=1944.00, stdev=77.02, samples=10 00:41:55.315 lat (msec) : 2=0.23%, 4=62.35%, 10=37.43% 00:41:55.315 cpu : usr=93.02%, sys=6.62%, ctx=9, majf=0, minf=9 00:41:55.315 IO depths : 1=0.1%, 2=0.9%, 4=69.4%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.315 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.315 issued rwts: total=9728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:55.315 00:41:55.315 Run status group 0 (all jobs): 00:41:55.315 READ: bw=62.1MiB/s (65.1MB/s), 15.2MiB/s-15.9MiB/s (15.9MB/s-16.7MB/s), io=311MiB (326MB), run=5002-5005msec 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 00:41:55.315 real 0m24.215s 00:41:55.315 user 4m59.276s 00:41:55.315 sys 0m9.373s 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 ************************************ 00:41:55.315 END TEST fio_dif_rand_params 00:41:55.315 ************************************ 00:41:55.315 14:09:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:55.315 14:09:47 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:55.315 14:09:47 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 ************************************ 00:41:55.315 START TEST fio_dif_digest 00:41:55.315 ************************************ 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:55.315 14:09:47 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 bdev_null0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:55.315 [2024-06-11 14:09:47.929182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:55.315 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- 
# create_json_sub_conf 0 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:55.316 { 00:41:55.316 "params": { 00:41:55.316 "name": "Nvme$subsystem", 00:41:55.316 "trtype": "$TEST_TRANSPORT", 00:41:55.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:55.316 "adrfam": "ipv4", 00:41:55.316 "trsvcid": "$NVMF_PORT", 00:41:55.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:55.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:55.316 "hdgst": ${hdgst:-false}, 00:41:55.316 "ddgst": ${ddgst:-false} 00:41:55.316 }, 00:41:55.316 "method": "bdev_nvme_attach_controller" 00:41:55.316 } 00:41:55.316 EOF 00:41:55.316 )") 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
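Note: the heredoc template above is rendered into the attach-controller object printed just below. A minimal sketch of that render step for one target (default transport, address and port are assumptions matching this run; the full gen_nvmf_target_json helper additionally wraps this object for the bdev subsystem, which is only partially visible in the trace) is:

#!/usr/bin/env bash
# Render bdev_nvme_attach_controller parameters for target 0 with digests on.
sub=0 hdgst=true ddgst=true
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}
cat <<EOF | jq .
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": $hdgst,
    "ddgst": $ddgst
  },
  "method": "bdev_nvme_attach_controller"
}
EOF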
00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:55.316 "params": { 00:41:55.316 "name": "Nvme0", 00:41:55.316 "trtype": "tcp", 00:41:55.316 "traddr": "10.0.0.2", 00:41:55.316 "adrfam": "ipv4", 00:41:55.316 "trsvcid": "4420", 00:41:55.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:55.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:55.316 "hdgst": true, 00:41:55.316 "ddgst": true 00:41:55.316 }, 00:41:55.316 "method": "bdev_nvme_attach_controller" 00:41:55.316 }' 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:55.316 14:09:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:55.316 14:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:55.316 14:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:55.316 14:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:55.316 14:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.576 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:55.576 ... 
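Note: the fio_bdev wrapper traced above reduces to preloading the SPDK fio plugin and pointing fio at the rendered JSON. A minimal equivalent invocation, as a sketch only (the job is fed over /dev/fd/62 in the real run; the bdev.json file name, the Nvme0n1 bdev name, and the inline job flags mirroring the digest test parameters are assumptions), would be:

# Run fio against the NVMe-oF target through the SPDK bdev engine.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
    --spdk_json_conf=bdev.json --filename=Nvme0n1 \
    --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
    --runtime=10 --time_based --thread=1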
00:41:55.576 fio-3.35 00:41:55.576 Starting 3 threads 00:41:55.576 EAL: No free 2048 kB hugepages reported on node 1 00:42:07.780 00:42:07.780 filename0: (groupid=0, jobs=1): err= 0: pid=1701666: Tue Jun 11 14:09:58 2024 00:42:07.780 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(266MiB/10047msec) 00:42:07.780 slat (nsec): min=8586, max=25572, avg=14014.19, stdev=2217.84 00:42:07.780 clat (usec): min=8618, max=52491, avg=14147.53, stdev=1991.80 00:42:07.780 lat (usec): min=8634, max=52506, avg=14161.54, stdev=1991.99 00:42:07.780 clat percentiles (usec): 00:42:07.780 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[11600], 20.00th=[13173], 00:42:07.780 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:42:07.780 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:42:07.780 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[49546], 00:42:07.780 | 99.99th=[52691] 00:42:07.780 bw ( KiB/s): min=26112, max=28928, per=34.68%, avg=27161.60, stdev=796.23, samples=20 00:42:07.780 iops : min= 204, max= 226, avg=212.20, stdev= 6.22, samples=20 00:42:07.780 lat (msec) : 10=3.06%, 20=96.85%, 50=0.05%, 100=0.05% 00:42:07.780 cpu : usr=90.65%, sys=8.99%, ctx=14, majf=0, minf=117 00:42:07.780 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.780 issued rwts: total=2125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.780 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:07.780 filename0: (groupid=0, jobs=1): err= 0: pid=1701667: Tue Jun 11 14:09:58 2024 00:42:07.780 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10049msec) 00:42:07.780 slat (nsec): min=8603, max=41349, avg=14314.58, stdev=2426.66 00:42:07.780 clat (usec): min=8512, max=57404, avg=14250.26, stdev=3936.83 00:42:07.780 lat (usec): min=8528, max=57415, avg=14264.58, stdev=3936.85 00:42:07.780 clat percentiles (usec): 00:42:07.780 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11994], 20.00th=[13042], 00:42:07.780 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:42:07.780 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[16057], 00:42:07.780 | 99.00th=[17957], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:42:07.780 | 99.99th=[57410] 00:42:07.780 bw ( KiB/s): min=24576, max=29696, per=34.43%, avg=26969.60, stdev=1350.23, samples=20 00:42:07.780 iops : min= 192, max= 232, avg=210.70, stdev=10.55, samples=20 00:42:07.780 lat (msec) : 10=2.75%, 20=96.40%, 50=0.05%, 100=0.81% 00:42:07.780 cpu : usr=90.80%, sys=8.84%, ctx=22, majf=0, minf=162 00:42:07.780 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.780 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.780 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:07.780 filename0: (groupid=0, jobs=1): err= 0: pid=1701668: Tue Jun 11 14:09:58 2024 00:42:07.780 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(239MiB/10006msec) 00:42:07.780 slat (nsec): min=8641, max=25385, avg=14293.51, stdev=2323.26 00:42:07.780 clat (usec): min=8963, max=96822, avg=15664.74, stdev=6556.23 00:42:07.780 lat (usec): min=8972, max=96837, avg=15679.04, stdev=6556.19 00:42:07.780 clat percentiles (usec): 
00:42:07.780 | 1.00th=[ 9896], 5.00th=[12649], 10.00th=[13304], 20.00th=[13829], 00:42:07.780 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14877], 60.00th=[15139], 00:42:07.780 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:42:07.780 | 99.00th=[56361], 99.50th=[56886], 99.90th=[58459], 99.95th=[96994], 00:42:07.780 | 99.99th=[96994] 00:42:07.780 bw ( KiB/s): min=19456, max=27392, per=31.23%, avg=24460.80, stdev=1856.24, samples=20 00:42:07.780 iops : min= 152, max= 214, avg=191.10, stdev=14.50, samples=20 00:42:07.780 lat (msec) : 10=1.04%, 20=96.60%, 50=0.05%, 100=2.30% 00:42:07.780 cpu : usr=91.41%, sys=8.23%, ctx=15, majf=0, minf=100 00:42:07.780 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:07.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:07.780 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:07.780 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:07.780 00:42:07.780 Run status group 0 (all jobs): 00:42:07.781 READ: bw=76.5MiB/s (80.2MB/s), 23.9MiB/s-26.4MiB/s (25.1MB/s-27.7MB/s), io=769MiB (806MB), run=10006-10049msec 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:07.781 00:42:07.781 real 0m11.208s 00:42:07.781 user 0m38.268s 00:42:07.781 sys 0m2.973s 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:07.781 14:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:07.781 ************************************ 00:42:07.781 END TEST fio_dif_digest 00:42:07.781 ************************************ 00:42:07.781 14:09:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:07.781 14:09:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:07.781 rmmod nvme_tcp 00:42:07.781 rmmod 
nvme_fabrics 00:42:07.781 rmmod nvme_keyring 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1692241 ']' 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1692241 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1692241 ']' 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1692241 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1692241 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1692241' 00:42:07.781 killing process with pid 1692241 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1692241 00:42:07.781 14:09:59 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1692241 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:42:07.781 14:09:59 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:09.684 Waiting for block devices as requested 00:42:09.684 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:09.684 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:09.684 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:09.684 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:09.942 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:09.942 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:09.942 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:10.201 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:10.201 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:10.201 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:10.460 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:10.460 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:10.460 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:10.718 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:10.718 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:10.718 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:10.977 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:42:10.977 14:10:03 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:10.977 14:10:03 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:10.977 14:10:03 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:10.977 14:10:03 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:10.977 14:10:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.977 14:10:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:10.977 14:10:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.511 14:10:05 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:13.511 00:42:13.511 real 1m16.512s 00:42:13.511 user 7m27.240s 00:42:13.511 sys 0m30.305s 00:42:13.511 14:10:05 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:13.511 14:10:05 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:42:13.511 ************************************ 00:42:13.511 END TEST nvmf_dif 00:42:13.511 ************************************ 00:42:13.511 14:10:05 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:13.511 14:10:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:13.511 14:10:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:13.511 14:10:05 -- common/autotest_common.sh@10 -- # set +x 00:42:13.511 ************************************ 00:42:13.511 START TEST nvmf_abort_qd_sizes 00:42:13.511 ************************************ 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:13.511 * Looking for test storage... 00:42:13.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:13.511 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:13.512 14:10:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:42:13.512 14:10:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:20.080 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:20.080 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:20.080 Found net devices under 0000:af:00.0: cvl_0_0 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:20.080 Found net devices under 0000:af:00.1: cvl_0_1 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
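Note: the discovery loop above pairs each supported NIC PCI function with its kernel net device by walking sysfs. A condensed sketch of that lookup, limited to the two E810 functions found on this node, is:

#!/usr/bin/env bash
# For each NVMf-capable PCI function, list the net devices bound to it.
for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] || continue
        echo "Found net devices under $pci: ${dev##*/}"
    done
done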
00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:20.080 14:10:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:20.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:20.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:42:20.338 00:42:20.338 --- 10.0.0.2 ping statistics --- 00:42:20.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.338 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:20.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:20.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:42:20.338 00:42:20.338 --- 10.0.0.1 ping statistics --- 00:42:20.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:20.338 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:42:20.338 14:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:23.622 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:23.622 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:25.039 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1709953 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1709953 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1709953 ']' 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:25.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:25.039 14:10:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:25.039 [2024-06-11 14:10:17.927256] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:42:25.039 [2024-06-11 14:10:17.927313] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:25.298 EAL: No free 2048 kB hugepages reported on node 1 00:42:25.298 [2024-06-11 14:10:18.035263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:25.298 [2024-06-11 14:10:18.120296] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:25.298 [2024-06-11 14:10:18.120344] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:25.298 [2024-06-11 14:10:18.120358] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:25.298 [2024-06-11 14:10:18.120370] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:25.298 [2024-06-11 14:10:18.120380] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:25.298 [2024-06-11 14:10:18.120440] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:25.298 [2024-06-11 14:10:18.120539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:42:25.298 [2024-06-11 14:10:18.121104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:42:25.298 [2024-06-11 14:10:18.121107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:26.235 14:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:26.235 ************************************ 00:42:26.235 START TEST spdk_target_abort 00:42:26.235 ************************************ 00:42:26.235 14:10:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:42:26.235 14:10:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:26.235 14:10:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:42:26.235 14:10:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.235 14:10:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.525 spdk_targetn1 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.525 [2024-06-11 14:10:21.799334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:29.525 [2024-06-11 14:10:21.832118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:29.525 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:29.526 14:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:29.526 EAL: No free 2048 kB hugepages reported on node 1 
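Note: rabort sweeps the configured queue depths; the three runs below are equivalent to a loop like the following (target string exactly as composed in the trace; the relative path to the abort example assumes this checkout's build tree):

# Sweep abort queue depths against the TCP subsystem, as rabort does.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done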
00:42:32.816 Initializing NVMe Controllers 00:42:32.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:32.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:32.816 Initialization complete. Launching workers. 00:42:32.816 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13090, failed: 0 00:42:32.816 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1445, failed to submit 11645 00:42:32.816 success 831, unsuccess 614, failed 0 00:42:32.816 14:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:32.816 14:10:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:32.816 EAL: No free 2048 kB hugepages reported on node 1 00:42:36.105 Initializing NVMe Controllers 00:42:36.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:36.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:36.105 Initialization complete. Launching workers. 00:42:36.105 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8574, failed: 0 00:42:36.105 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1284, failed to submit 7290 00:42:36.105 success 282, unsuccess 1002, failed 0 00:42:36.105 14:10:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:36.105 14:10:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:36.105 EAL: No free 2048 kB hugepages reported on node 1 00:42:39.397 Initializing NVMe Controllers 00:42:39.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:39.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:39.397 Initialization complete. Launching workers. 
00:42:39.397 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37690, failed: 0 00:42:39.397 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2713, failed to submit 34977 00:42:39.397 success 563, unsuccess 2150, failed 0 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:39.397 14:10:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1709953 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1709953 ']' 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1709953 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1709953 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1709953' 00:42:40.771 killing process with pid 1709953 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1709953 00:42:40.771 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1709953 00:42:41.030 00:42:41.030 real 0m14.903s 00:42:41.030 user 0m59.202s 00:42:41.030 sys 0m2.747s 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:41.030 ************************************ 00:42:41.030 END TEST spdk_target_abort 00:42:41.030 ************************************ 00:42:41.030 14:10:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:41.030 14:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:41.030 14:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:41.030 14:10:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:41.030 ************************************ 00:42:41.030 START TEST kernel_target_abort 00:42:41.030 
************************************ 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:41.030 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:41.031 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:41.031 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:42:41.031 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:41.031 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:42:41.290 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:41.290 14:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:44.581 Waiting for block devices as requested 00:42:44.581 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:44.581 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:44.581 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:44.581 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:44.841 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:44.841 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:45.100 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:45.100 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:45.100 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:45.100 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:45.359 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:45.359 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:45.359 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:45.618 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:45.618 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:45.618 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:45.878 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:45.878 No valid GPT data, bailing 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:42:45.878 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:46.137 14:10:38 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:42:46.137 00:42:46.137 Discovery Log Number of Records 2, Generation counter 2 00:42:46.137 =====Discovery Log Entry 0====== 00:42:46.137 trtype: tcp 00:42:46.137 adrfam: ipv4 00:42:46.137 subtype: current discovery subsystem 00:42:46.137 treq: not specified, sq flow control disable supported 00:42:46.137 portid: 1 00:42:46.137 trsvcid: 4420 00:42:46.137 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:46.137 traddr: 10.0.0.1 00:42:46.137 eflags: none 00:42:46.137 sectype: none 00:42:46.137 =====Discovery Log Entry 1====== 00:42:46.137 trtype: tcp 00:42:46.137 adrfam: ipv4 00:42:46.137 subtype: nvme subsystem 00:42:46.137 treq: not specified, sq flow control disable supported 00:42:46.137 portid: 1 00:42:46.137 trsvcid: 4420 00:42:46.137 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:46.137 traddr: 10.0.0.1 00:42:46.137 eflags: none 00:42:46.137 sectype: none 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:46.137 14:10:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:46.137 14:10:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:46.137 EAL: No free 2048 kB hugepages reported on node 1 00:42:49.426 Initializing NVMe Controllers 00:42:49.426 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:49.426 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:49.426 Initialization complete. Launching workers. 00:42:49.426 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52189, failed: 0 00:42:49.426 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 52189, failed to submit 0 00:42:49.426 success 0, unsuccess 52189, failed 0 00:42:49.426 14:10:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:49.426 14:10:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:49.426 EAL: No free 2048 kB hugepages reported on node 1 00:42:52.790 Initializing NVMe Controllers 00:42:52.790 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:52.790 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:52.790 Initialization complete. Launching workers. 
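The configure_kernel_target trace above (nvmf/common.sh, the mkdir/echo/ln sequence around lines 658-677) builds the kernel NVMe-oF target through configfs. Condensed into one sketch below; the trace only shows the values being echoed, so the configfs attribute file names are filled in from the standard kernel nvmet layout and should be read as an illustration, not a transcript:

  modprobe nvmet_tcp   # pulls in nvmet as a dependency
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"    # accept any host NQN
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"              # listen address
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                  # expose the subsystem on the port

Once the symlink lands, the discovery above returns two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.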
00:42:52.790 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89400, failed: 0 00:42:52.790 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22502, failed to submit 66898 00:42:52.790 success 0, unsuccess 22502, failed 0 00:42:52.790 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:52.790 14:10:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:52.790 EAL: No free 2048 kB hugepages reported on node 1 00:42:55.324 Initializing NVMe Controllers 00:42:55.324 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:55.324 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:55.324 Initialization complete. Launching workers. 00:42:55.324 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85448, failed: 0 00:42:55.324 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21370, failed to submit 64078 00:42:55.324 success 0, unsuccess 21370, failed 0 00:42:55.324 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:55.324 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:55.324 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:42:55.583 14:10:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:58.873 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:42:58.873 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:42:58.873 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:00.255 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:43:00.255 00:43:00.255 real 0m19.092s 00:43:00.255 user 0m8.035s 00:43:00.255 sys 0m5.834s 00:43:00.255 14:10:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:00.255 14:10:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:00.255 ************************************ 00:43:00.255 END TEST kernel_target_abort 00:43:00.255 ************************************ 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:00.255 rmmod nvme_tcp 00:43:00.255 rmmod nvme_fabrics 00:43:00.255 rmmod nvme_keyring 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1709953 ']' 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1709953 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1709953 ']' 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1709953 00:43:00.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1709953) - No such process 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1709953 is not found' 00:43:00.255 Process with pid 1709953 is not found 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:43:00.255 14:10:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:03.547 Waiting for block devices as requested 00:43:03.547 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:03.547 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:03.807 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:03.807 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:03.807 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:04.066 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:04.066 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:04.066 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:04.325 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:04.325 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:04.325 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:04.325 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:04.584 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:04.584 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:04.584 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:04.844 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:04.844 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:04.844 14:10:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.381 14:10:59 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:07.381 00:43:07.381 real 0m53.790s 00:43:07.381 user 1m11.968s 00:43:07.381 sys 0m18.790s 00:43:07.381 14:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:07.381 14:10:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:07.381 ************************************ 00:43:07.381 END TEST nvmf_abort_qd_sizes 00:43:07.381 ************************************ 00:43:07.381 14:10:59 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:07.381 14:10:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:07.381 14:10:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:07.381 14:10:59 -- common/autotest_common.sh@10 -- # set +x 00:43:07.381 ************************************ 00:43:07.381 START TEST keyring_file 00:43:07.381 ************************************ 00:43:07.381 14:10:59 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:07.381 * Looking for test storage... 
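Teardown is the mirror image of the setup sketched earlier. The clean_kernel_target trace above (nvmf/common.sh@684-698) quiesces the namespace, removes the configfs objects in reverse dependency order, then unloads the modules; condensed below, reusing the variables from the setup sketch and again with attribute names supplied rather than traced:

  echo 0 > "$subsys/namespaces/1/enable"               # disable the namespace first
  rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir  "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet

After that, setup.sh hands the PCI functions back for SPDK's use, which is the ioatdma/vfio-pci rebind run visible above.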
00:43:07.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:07.381 14:10:59 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:07.381 14:10:59 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:07.381 14:10:59 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:07.381 14:11:00 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:07.381 14:11:00 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:07.381 14:11:00 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:07.381 14:11:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.381 14:11:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.381 14:11:00 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.381 14:11:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:07.381 14:11:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@47 -- # : 0 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:07.381 14:11:00 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:07.381 14:11:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TQgqatD8Aw 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:07.382 14:11:00 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TQgqatD8Aw 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TQgqatD8Aw 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.TQgqatD8Aw 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pWK1frCIsM 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:07.382 14:11:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pWK1frCIsM 00:43:07.382 14:11:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pWK1frCIsM 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pWK1frCIsM 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=1719373 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:07.382 14:11:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1719373 00:43:07.382 14:11:00 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1719373 ']' 00:43:07.382 14:11:00 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:07.382 14:11:00 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:07.382 14:11:00 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:07.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:07.382 14:11:00 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:07.382 14:11:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:07.382 [2024-06-11 14:11:00.177234] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
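The prep_key trace above turns a raw hex key into a TLS PSK file the keyring can load: mktemp a path, encode the key through format_interchange_psk (the bare "python -" in the trace), then chmod 0600. The encoding itself is not visible in the trace; the sketch below approximates it from the NVMe-oF PSK interchange format, i.e. base64 over the key bytes plus a little-endian CRC32, prefixed with NVMeTLSkey-1 and a two-digit hash indicator (00 for digest 0). Treat the details as an assumption rather than the script's exact code:

  # Encode a 16-byte hex key in NVMe TLS PSK interchange form
  # (base64 of key bytes + little-endian CRC32; hash indicator 00 = none).
  key=00112233445566778899aabbccddeeff
  python3 -c "import base64, zlib; k = bytes.fromhex('$key'); \
  crc = zlib.crc32(k).to_bytes(4, 'little'); \
  print('NVMeTLSkey-1:00:' + base64.b64encode(k + crc).decode() + ':')"

The chmod 0600 is not cosmetic: later in this test the same file is deliberately re-chmodded to 0660 and keyring_file_add_key rejects it with 'Invalid permissions for key file'.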
00:43:07.382 [2024-06-11 14:11:00.177290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719373 ] 00:43:07.382 EAL: No free 2048 kB hugepages reported on node 1 00:43:07.382 [2024-06-11 14:11:00.267637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.641 [2024-06-11 14:11:00.355684] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.210 14:11:01 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:08.210 14:11:01 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:43:08.210 14:11:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:08.210 14:11:01 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:08.210 14:11:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.210 [2024-06-11 14:11:01.083987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:08.210 null0 00:43:08.210 [2024-06-11 14:11:01.116041] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:08.210 [2024-06-11 14:11:01.116378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:08.469 [2024-06-11 14:11:01.124064] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:08.469 14:11:01 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.469 [2024-06-11 14:11:01.136086] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:08.469 request: 00:43:08.469 { 00:43:08.469 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:08.469 "secure_channel": false, 00:43:08.469 "listen_address": { 00:43:08.469 "trtype": "tcp", 00:43:08.469 "traddr": "127.0.0.1", 00:43:08.469 "trsvcid": "4420" 00:43:08.469 }, 00:43:08.469 "method": "nvmf_subsystem_add_listener", 00:43:08.469 "req_id": 1 00:43:08.469 } 00:43:08.469 Got JSON-RPC error response 00:43:08.469 response: 00:43:08.469 { 00:43:08.469 "code": -32602, 00:43:08.469 "message": "Invalid parameters" 00:43:08.469 } 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:08.469 14:11:01 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:08.469 14:11:01 keyring_file -- keyring/file.sh@46 -- # bperfpid=1719407 00:43:08.469 14:11:01 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1719407 /var/tmp/bperf.sock 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1719407 ']' 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:08.469 14:11:01 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:08.470 14:11:01 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:08.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:08.470 14:11:01 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:08.470 14:11:01 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:08.470 14:11:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.470 [2024-06-11 14:11:01.191960] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:43:08.470 [2024-06-11 14:11:01.192024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719407 ] 00:43:08.470 EAL: No free 2048 kB hugepages reported on node 1 00:43:08.470 [2024-06-11 14:11:01.283986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.470 [2024-06-11 14:11:01.371140] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:09.405 14:11:02 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:09.405 14:11:02 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:43:09.405 14:11:02 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:09.405 14:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:09.663 14:11:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pWK1frCIsM 00:43:09.663 14:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pWK1frCIsM 00:43:09.663 14:11:02 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:43:09.663 14:11:02 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:43:09.663 14:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.663 14:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:09.663 14:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.922 14:11:02 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.TQgqatD8Aw == \/\t\m\p\/\t\m\p\.\T\Q\g\q\a\t\D\8\A\w ]] 00:43:09.922 14:11:02 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:43:09.922 14:11:02 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:09.922 14:11:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.922 14:11:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.922 14:11:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:10.182 14:11:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pWK1frCIsM == \/\t\m\p\/\t\m\p\.\p\W\K\1\f\r\C\I\s\M ]] 00:43:10.182 14:11:03 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:43:10.182 14:11:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:10.182 14:11:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.182 14:11:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.182 14:11:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:10.182 14:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.441 14:11:03 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:43:10.441 14:11:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:43:10.441 14:11:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:10.441 14:11:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.441 14:11:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.441 14:11:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:10.441 14:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.700 14:11:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:10.700 14:11:03 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.700 14:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.959 [2024-06-11 14:11:03.687451] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:10.959 nvme0n1 00:43:10.959 14:11:03 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:43:10.959 14:11:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:10.959 14:11:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.959 14:11:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.959 14:11:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.959 14:11:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.219 14:11:04 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:43:11.219 14:11:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:43:11.219 14:11:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:11.219 14:11:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.219 14:11:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.219 
14:11:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:11.219 14:11:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.479 14:11:04 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:43:11.479 14:11:04 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:11.479 Running I/O for 1 seconds... 00:43:12.858 00:43:12.858 Latency(us) 00:43:12.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:12.858 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:12.858 nvme0n1 : 1.01 9953.24 38.88 0.00 0.00 12810.71 6763.32 26214.40 00:43:12.858 =================================================================================================================== 00:43:12.858 Total : 9953.24 38.88 0.00 0.00 12810.71 6763.32 26214.40 00:43:12.858 0 00:43:12.858 14:11:05 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:12.858 14:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:12.858 14:11:05 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:43:12.858 14:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:12.858 14:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:12.858 14:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:12.858 14:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.858 14:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:13.117 14:11:05 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:43:13.117 14:11:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:43:13.117 14:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:13.117 14:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:13.118 14:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:13.118 14:11:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.118 14:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:13.377 14:11:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:13.377 14:11:06 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:13.377 14:11:06 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:13.377 14:11:06 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:13.377 14:11:06 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:13.377 14:11:06 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:13.377 14:11:06 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:13.377 14:11:06 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:13.377 14:11:06 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:13.377 14:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:13.692 [2024-06-11 14:11:06.294388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:13.692 [2024-06-11 14:11:06.295047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbffb0 (107): Transport endpoint is not connected 00:43:13.692 [2024-06-11 14:11:06.296041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbffb0 (9): Bad file descriptor 00:43:13.692 [2024-06-11 14:11:06.297041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:13.692 [2024-06-11 14:11:06.297057] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:13.692 [2024-06-11 14:11:06.297069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:13.692 request: 00:43:13.692 { 00:43:13.692 "name": "nvme0", 00:43:13.692 "trtype": "tcp", 00:43:13.692 "traddr": "127.0.0.1", 00:43:13.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.692 "adrfam": "ipv4", 00:43:13.692 "trsvcid": "4420", 00:43:13.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:13.692 "psk": "key1", 00:43:13.692 "method": "bdev_nvme_attach_controller", 00:43:13.692 "req_id": 1 00:43:13.692 } 00:43:13.692 Got JSON-RPC error response 00:43:13.692 response: 00:43:13.692 { 00:43:13.692 "code": -5, 00:43:13.692 "message": "Input/output error" 00:43:13.692 } 00:43:13.692 14:11:06 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:13.692 14:11:06 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:13.692 14:11:06 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:13.692 14:11:06 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:13.692 14:11:06 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:13.692 14:11:06 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:43:13.692 14:11:06 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:43:13.692 14:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.976 14:11:06 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:13.976 14:11:06 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:43:13.976 14:11:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:14.235 14:11:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:43:14.235 14:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:14.495 14:11:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:43:14.495 14:11:07 keyring_file -- keyring/file.sh@77 -- # jq length 00:43:14.495 14:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.754 14:11:07 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:43:14.755 14:11:07 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.TQgqatD8Aw 00:43:14.755 14:11:07 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:14.755 14:11:07 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:14.755 14:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:14.755 [2024-06-11 14:11:07.659755] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TQgqatD8Aw': 0100660 00:43:14.755 [2024-06-11 14:11:07.659789] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:15.014 request: 00:43:15.014 { 00:43:15.014 "name": "key0", 00:43:15.014 "path": "/tmp/tmp.TQgqatD8Aw", 00:43:15.014 "method": "keyring_file_add_key", 00:43:15.014 "req_id": 1 00:43:15.014 } 00:43:15.014 Got JSON-RPC error response 00:43:15.014 response: 00:43:15.014 { 00:43:15.014 "code": -1, 00:43:15.014 "message": "Operation not permitted" 00:43:15.014 } 00:43:15.014 14:11:07 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:15.014 14:11:07 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:15.014 14:11:07 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:15.014 14:11:07 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:15.014 14:11:07 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.TQgqatD8Aw 00:43:15.014 14:11:07 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:15.014 14:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TQgqatD8Aw 00:43:15.014 14:11:07 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.TQgqatD8Aw 00:43:15.014 14:11:07 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:43:15.014 14:11:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:15.014 14:11:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:15.014 14:11:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:15.014 14:11:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:15.014 14:11:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:15.274 14:11:08 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:43:15.274 14:11:08 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:15.274 14:11:08 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:15.274 14:11:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:15.534 [2024-06-11 14:11:08.293441] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.TQgqatD8Aw': No such file or directory 00:43:15.534 [2024-06-11 14:11:08.293470] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:15.534 [2024-06-11 14:11:08.293506] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:15.534 [2024-06-11 14:11:08.293518] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:15.534 [2024-06-11 14:11:08.293529] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:15.534 request: 00:43:15.534 { 00:43:15.534 "name": "nvme0", 00:43:15.534 "trtype": "tcp", 00:43:15.534 "traddr": "127.0.0.1", 00:43:15.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:15.534 "adrfam": "ipv4", 00:43:15.534 "trsvcid": "4420", 00:43:15.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:15.534 "psk": "key0", 00:43:15.534 "method": "bdev_nvme_attach_controller", 
00:43:15.534 "req_id": 1 00:43:15.534 } 00:43:15.534 Got JSON-RPC error response 00:43:15.534 response: 00:43:15.534 { 00:43:15.534 "code": -19, 00:43:15.534 "message": "No such device" 00:43:15.534 } 00:43:15.534 14:11:08 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:15.534 14:11:08 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:15.534 14:11:08 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:15.534 14:11:08 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:15.534 14:11:08 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:43:15.534 14:11:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:15.793 14:11:08 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rhXZ57sMk2 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:15.793 14:11:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:15.793 14:11:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:15.793 14:11:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:15.793 14:11:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:15.793 14:11:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:15.793 14:11:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rhXZ57sMk2 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rhXZ57sMk2 00:43:15.793 14:11:08 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.rhXZ57sMk2 00:43:15.793 14:11:08 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rhXZ57sMk2 00:43:15.793 14:11:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rhXZ57sMk2 00:43:16.052 14:11:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:16.052 14:11:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:16.312 nvme0n1 00:43:16.312 14:11:09 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:43:16.312 14:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:16.312 14:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:16.312 14:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.312 14:11:09 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.312 14:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:16.571 14:11:09 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:43:16.571 14:11:09 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:43:16.571 14:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:16.830 14:11:09 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:43:16.830 14:11:09 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:43:16.830 14:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.830 14:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.830 14:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:17.092 14:11:09 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:43:17.092 14:11:09 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:43:17.092 14:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:17.092 14:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:17.092 14:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:17.092 14:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:17.092 14:11:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:17.351 14:11:10 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:43:17.351 14:11:10 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:17.351 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:17.610 14:11:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:43:17.610 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:17.610 14:11:10 keyring_file -- keyring/file.sh@104 -- # jq length 00:43:17.610 14:11:10 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:43:17.610 14:11:10 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rhXZ57sMk2 00:43:17.610 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rhXZ57sMk2 00:43:17.870 14:11:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pWK1frCIsM 00:43:17.870 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pWK1frCIsM 00:43:18.129 14:11:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:18.129 14:11:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:18.389 nvme0n1 00:43:18.389 14:11:11 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:43:18.389 14:11:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:18.648 14:11:11 keyring_file -- keyring/file.sh@112 -- # config='{ 00:43:18.648 "subsystems": [ 00:43:18.648 { 00:43:18.648 "subsystem": "keyring", 00:43:18.648 "config": [ 00:43:18.648 { 00:43:18.648 "method": "keyring_file_add_key", 00:43:18.648 "params": { 00:43:18.648 "name": "key0", 00:43:18.648 "path": "/tmp/tmp.rhXZ57sMk2" 00:43:18.648 } 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "method": "keyring_file_add_key", 00:43:18.648 "params": { 00:43:18.648 "name": "key1", 00:43:18.648 "path": "/tmp/tmp.pWK1frCIsM" 00:43:18.648 } 00:43:18.648 } 00:43:18.648 ] 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "subsystem": "iobuf", 00:43:18.648 "config": [ 00:43:18.648 { 00:43:18.648 "method": "iobuf_set_options", 00:43:18.648 "params": { 00:43:18.648 "small_pool_count": 8192, 00:43:18.648 "large_pool_count": 1024, 00:43:18.648 "small_bufsize": 8192, 00:43:18.648 "large_bufsize": 135168 00:43:18.648 } 00:43:18.648 } 00:43:18.648 ] 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "subsystem": "sock", 00:43:18.648 "config": [ 00:43:18.648 { 00:43:18.648 "method": "sock_set_default_impl", 00:43:18.648 "params": { 00:43:18.648 "impl_name": "posix" 00:43:18.648 } 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "method": "sock_impl_set_options", 00:43:18.648 "params": { 00:43:18.648 "impl_name": "ssl", 00:43:18.648 "recv_buf_size": 4096, 00:43:18.648 "send_buf_size": 4096, 00:43:18.648 "enable_recv_pipe": true, 00:43:18.648 "enable_quickack": false, 00:43:18.648 "enable_placement_id": 0, 00:43:18.648 "enable_zerocopy_send_server": true, 00:43:18.648 "enable_zerocopy_send_client": false, 00:43:18.648 "zerocopy_threshold": 0, 00:43:18.648 "tls_version": 0, 00:43:18.648 "enable_ktls": false 00:43:18.648 } 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "method": "sock_impl_set_options", 00:43:18.648 "params": { 00:43:18.648 "impl_name": "posix", 00:43:18.648 "recv_buf_size": 2097152, 00:43:18.648 "send_buf_size": 2097152, 00:43:18.648 "enable_recv_pipe": true, 00:43:18.648 "enable_quickack": false, 00:43:18.648 "enable_placement_id": 0, 00:43:18.648 "enable_zerocopy_send_server": true, 00:43:18.648 "enable_zerocopy_send_client": false, 00:43:18.648 "zerocopy_threshold": 0, 00:43:18.648 "tls_version": 0, 00:43:18.648 "enable_ktls": false 00:43:18.648 } 00:43:18.648 } 00:43:18.648 ] 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "subsystem": "vmd", 00:43:18.648 "config": [] 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "subsystem": "accel", 00:43:18.648 "config": [ 00:43:18.648 { 00:43:18.648 "method": "accel_set_options", 00:43:18.648 "params": { 00:43:18.648 "small_cache_size": 128, 00:43:18.648 "large_cache_size": 16, 00:43:18.648 "task_count": 2048, 00:43:18.648 "sequence_count": 2048, 00:43:18.648 "buf_count": 2048 00:43:18.648 } 00:43:18.648 } 00:43:18.648 ] 00:43:18.648 }, 00:43:18.648 { 00:43:18.648 "subsystem": "bdev", 00:43:18.648 "config": [ 00:43:18.648 { 00:43:18.648 "method": "bdev_set_options", 00:43:18.648 "params": { 00:43:18.648 "bdev_io_pool_size": 65535, 00:43:18.649 "bdev_io_cache_size": 256, 00:43:18.649 "bdev_auto_examine": true, 00:43:18.649 "iobuf_small_cache_size": 128, 
00:43:18.649 "iobuf_large_cache_size": 16 00:43:18.649 } 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "method": "bdev_raid_set_options", 00:43:18.649 "params": { 00:43:18.649 "process_window_size_kb": 1024 00:43:18.649 } 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "method": "bdev_iscsi_set_options", 00:43:18.649 "params": { 00:43:18.649 "timeout_sec": 30 00:43:18.649 } 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "method": "bdev_nvme_set_options", 00:43:18.649 "params": { 00:43:18.649 "action_on_timeout": "none", 00:43:18.649 "timeout_us": 0, 00:43:18.649 "timeout_admin_us": 0, 00:43:18.649 "keep_alive_timeout_ms": 10000, 00:43:18.649 "arbitration_burst": 0, 00:43:18.649 "low_priority_weight": 0, 00:43:18.649 "medium_priority_weight": 0, 00:43:18.649 "high_priority_weight": 0, 00:43:18.649 "nvme_adminq_poll_period_us": 10000, 00:43:18.649 "nvme_ioq_poll_period_us": 0, 00:43:18.649 "io_queue_requests": 512, 00:43:18.649 "delay_cmd_submit": true, 00:43:18.649 "transport_retry_count": 4, 00:43:18.649 "bdev_retry_count": 3, 00:43:18.649 "transport_ack_timeout": 0, 00:43:18.649 "ctrlr_loss_timeout_sec": 0, 00:43:18.649 "reconnect_delay_sec": 0, 00:43:18.649 "fast_io_fail_timeout_sec": 0, 00:43:18.649 "disable_auto_failback": false, 00:43:18.649 "generate_uuids": false, 00:43:18.649 "transport_tos": 0, 00:43:18.649 "nvme_error_stat": false, 00:43:18.649 "rdma_srq_size": 0, 00:43:18.649 "io_path_stat": false, 00:43:18.649 "allow_accel_sequence": false, 00:43:18.649 "rdma_max_cq_size": 0, 00:43:18.649 "rdma_cm_event_timeout_ms": 0, 00:43:18.649 "dhchap_digests": [ 00:43:18.649 "sha256", 00:43:18.649 "sha384", 00:43:18.649 "sha512" 00:43:18.649 ], 00:43:18.649 "dhchap_dhgroups": [ 00:43:18.649 "null", 00:43:18.649 "ffdhe2048", 00:43:18.649 "ffdhe3072", 00:43:18.649 "ffdhe4096", 00:43:18.649 "ffdhe6144", 00:43:18.649 "ffdhe8192" 00:43:18.649 ] 00:43:18.649 } 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "method": "bdev_nvme_attach_controller", 00:43:18.649 "params": { 00:43:18.649 "name": "nvme0", 00:43:18.649 "trtype": "TCP", 00:43:18.649 "adrfam": "IPv4", 00:43:18.649 "traddr": "127.0.0.1", 00:43:18.649 "trsvcid": "4420", 00:43:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:18.649 "prchk_reftag": false, 00:43:18.649 "prchk_guard": false, 00:43:18.649 "ctrlr_loss_timeout_sec": 0, 00:43:18.649 "reconnect_delay_sec": 0, 00:43:18.649 "fast_io_fail_timeout_sec": 0, 00:43:18.649 "psk": "key0", 00:43:18.649 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:18.649 "hdgst": false, 00:43:18.649 "ddgst": false 00:43:18.649 } 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "method": "bdev_nvme_set_hotplug", 00:43:18.649 "params": { 00:43:18.649 "period_us": 100000, 00:43:18.649 "enable": false 00:43:18.649 } 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "method": "bdev_wait_for_examine" 00:43:18.649 } 00:43:18.649 ] 00:43:18.649 }, 00:43:18.649 { 00:43:18.649 "subsystem": "nbd", 00:43:18.649 "config": [] 00:43:18.649 } 00:43:18.649 ] 00:43:18.649 }' 00:43:18.649 14:11:11 keyring_file -- keyring/file.sh@114 -- # killprocess 1719407 00:43:18.649 14:11:11 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1719407 ']' 00:43:18.649 14:11:11 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1719407 00:43:18.649 14:11:11 keyring_file -- common/autotest_common.sh@954 -- # uname 00:43:18.649 14:11:11 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:18.649 14:11:11 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1719407 00:43:18.909 14:11:11 keyring_file 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1719407' 00:43:18.909 killing process with pid 1719407 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@968 -- # kill 1719407 00:43:18.909 Received shutdown signal, test time was about 1.000000 seconds 00:43:18.909 00:43:18.909 Latency(us) 00:43:18.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.909 =================================================================================================================== 00:43:18.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@973 -- # wait 1719407 00:43:18.909 14:11:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=1721368 00:43:18.909 14:11:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1721368 /var/tmp/bperf.sock 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1721368 ']' 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:18.909 14:11:11 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:18.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
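The second bdevperf instance above is started with -c /dev/fd/63: the shell substitutes a pipe carrying the JSON that save_config returned from the first instance, so the configuration round-trips between the two processes without ever touching disk. A minimal sketch of the same trick in Python, reusing the bdevperf path and flags from the invocation above (the config dict is a stand-in for the real save_config output):

    import json, os, subprocess

    config = {"subsystems": []}  # stand-in for the save_config JSON dumped above
    r, w = os.pipe()
    proc = subprocess.Popen(
        ["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf",
         "-q", "128", "-o", "4k", "-w", "randrw", "-M", "50", "-t", "1", "-m", "2",
         "-z", "-r", "/var/tmp/bperf.sock", "-c", f"/dev/fd/{r}"],
        pass_fds=[r])            # keep the read end open in the child
    os.close(r)                  # parent no longer needs its copy
    os.write(w, json.dumps(config).encode())
    os.close(w)                  # EOF tells the app the config is complete
    proc.wait()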
00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:18.909 14:11:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:18.909 14:11:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:43:18.909 "subsystems": [ 00:43:18.909 { 00:43:18.909 "subsystem": "keyring", 00:43:18.909 "config": [ 00:43:18.909 { 00:43:18.909 "method": "keyring_file_add_key", 00:43:18.909 "params": { 00:43:18.909 "name": "key0", 00:43:18.909 "path": "/tmp/tmp.rhXZ57sMk2" 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "method": "keyring_file_add_key", 00:43:18.909 "params": { 00:43:18.909 "name": "key1", 00:43:18.909 "path": "/tmp/tmp.pWK1frCIsM" 00:43:18.909 } 00:43:18.909 } 00:43:18.909 ] 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "subsystem": "iobuf", 00:43:18.909 "config": [ 00:43:18.909 { 00:43:18.909 "method": "iobuf_set_options", 00:43:18.909 "params": { 00:43:18.909 "small_pool_count": 8192, 00:43:18.909 "large_pool_count": 1024, 00:43:18.909 "small_bufsize": 8192, 00:43:18.909 "large_bufsize": 135168 00:43:18.909 } 00:43:18.909 } 00:43:18.909 ] 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "subsystem": "sock", 00:43:18.909 "config": [ 00:43:18.909 { 00:43:18.909 "method": "sock_set_default_impl", 00:43:18.909 "params": { 00:43:18.909 "impl_name": "posix" 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "method": "sock_impl_set_options", 00:43:18.909 "params": { 00:43:18.909 "impl_name": "ssl", 00:43:18.909 "recv_buf_size": 4096, 00:43:18.909 "send_buf_size": 4096, 00:43:18.909 "enable_recv_pipe": true, 00:43:18.909 "enable_quickack": false, 00:43:18.909 "enable_placement_id": 0, 00:43:18.909 "enable_zerocopy_send_server": true, 00:43:18.909 "enable_zerocopy_send_client": false, 00:43:18.909 "zerocopy_threshold": 0, 00:43:18.909 "tls_version": 0, 00:43:18.909 "enable_ktls": false 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "method": "sock_impl_set_options", 00:43:18.909 "params": { 00:43:18.909 "impl_name": "posix", 00:43:18.909 "recv_buf_size": 2097152, 00:43:18.909 "send_buf_size": 2097152, 00:43:18.909 "enable_recv_pipe": true, 00:43:18.909 "enable_quickack": false, 00:43:18.909 "enable_placement_id": 0, 00:43:18.909 "enable_zerocopy_send_server": true, 00:43:18.909 "enable_zerocopy_send_client": false, 00:43:18.909 "zerocopy_threshold": 0, 00:43:18.909 "tls_version": 0, 00:43:18.909 "enable_ktls": false 00:43:18.909 } 00:43:18.909 } 00:43:18.909 ] 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "subsystem": "vmd", 00:43:18.909 "config": [] 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "subsystem": "accel", 00:43:18.909 "config": [ 00:43:18.909 { 00:43:18.909 "method": "accel_set_options", 00:43:18.909 "params": { 00:43:18.909 "small_cache_size": 128, 00:43:18.909 "large_cache_size": 16, 00:43:18.909 "task_count": 2048, 00:43:18.909 "sequence_count": 2048, 00:43:18.909 "buf_count": 2048 00:43:18.909 } 00:43:18.909 } 00:43:18.909 ] 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "subsystem": "bdev", 00:43:18.909 "config": [ 00:43:18.909 { 00:43:18.909 "method": "bdev_set_options", 00:43:18.909 "params": { 00:43:18.909 "bdev_io_pool_size": 65535, 00:43:18.909 "bdev_io_cache_size": 256, 00:43:18.909 "bdev_auto_examine": true, 00:43:18.909 "iobuf_small_cache_size": 128, 00:43:18.909 "iobuf_large_cache_size": 16 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "method": "bdev_raid_set_options", 00:43:18.909 "params": { 00:43:18.909 "process_window_size_kb": 1024 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 
"method": "bdev_iscsi_set_options", 00:43:18.909 "params": { 00:43:18.909 "timeout_sec": 30 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "method": "bdev_nvme_set_options", 00:43:18.909 "params": { 00:43:18.909 "action_on_timeout": "none", 00:43:18.909 "timeout_us": 0, 00:43:18.909 "timeout_admin_us": 0, 00:43:18.909 "keep_alive_timeout_ms": 10000, 00:43:18.909 "arbitration_burst": 0, 00:43:18.909 "low_priority_weight": 0, 00:43:18.909 "medium_priority_weight": 0, 00:43:18.909 "high_priority_weight": 0, 00:43:18.909 "nvme_adminq_poll_period_us": 10000, 00:43:18.909 "nvme_ioq_poll_period_us": 0, 00:43:18.909 "io_queue_requests": 512, 00:43:18.909 "delay_cmd_submit": true, 00:43:18.909 "transport_retry_count": 4, 00:43:18.909 "bdev_retry_count": 3, 00:43:18.909 "transport_ack_timeout": 0, 00:43:18.909 "ctrlr_loss_timeout_sec": 0, 00:43:18.909 "reconnect_delay_sec": 0, 00:43:18.909 "fast_io_fail_timeout_sec": 0, 00:43:18.909 "disable_auto_failback": false, 00:43:18.909 "generate_uuids": false, 00:43:18.909 "transport_tos": 0, 00:43:18.909 "nvme_error_stat": false, 00:43:18.909 "rdma_srq_size": 0, 00:43:18.909 "io_path_stat": false, 00:43:18.909 "allow_accel_sequence": false, 00:43:18.909 "rdma_max_cq_size": 0, 00:43:18.909 "rdma_cm_event_timeout_ms": 0, 00:43:18.909 "dhchap_digests": [ 00:43:18.909 "sha256", 00:43:18.909 "sha384", 00:43:18.909 "sha512" 00:43:18.909 ], 00:43:18.909 "dhchap_dhgroups": [ 00:43:18.909 "null", 00:43:18.909 "ffdhe2048", 00:43:18.909 "ffdhe3072", 00:43:18.909 "ffdhe4096", 00:43:18.909 "ffdhe6144", 00:43:18.909 "ffdhe8192" 00:43:18.909 ] 00:43:18.909 } 00:43:18.909 }, 00:43:18.909 { 00:43:18.909 "method": "bdev_nvme_attach_controller", 00:43:18.909 "params": { 00:43:18.909 "name": "nvme0", 00:43:18.909 "trtype": "TCP", 00:43:18.909 "adrfam": "IPv4", 00:43:18.909 "traddr": "127.0.0.1", 00:43:18.909 "trsvcid": "4420", 00:43:18.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:18.909 "prchk_reftag": false, 00:43:18.909 "prchk_guard": false, 00:43:18.909 "ctrlr_loss_timeout_sec": 0, 00:43:18.909 "reconnect_delay_sec": 0, 00:43:18.909 "fast_io_fail_timeout_sec": 0, 00:43:18.910 "psk": "key0", 00:43:18.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:18.910 "hdgst": false, 00:43:18.910 "ddgst": false 00:43:18.910 } 00:43:18.910 }, 00:43:18.910 { 00:43:18.910 "method": "bdev_nvme_set_hotplug", 00:43:18.910 "params": { 00:43:18.910 "period_us": 100000, 00:43:18.910 "enable": false 00:43:18.910 } 00:43:18.910 }, 00:43:18.910 { 00:43:18.910 "method": "bdev_wait_for_examine" 00:43:18.910 } 00:43:18.910 ] 00:43:18.910 }, 00:43:18.910 { 00:43:18.910 "subsystem": "nbd", 00:43:18.910 "config": [] 00:43:18.910 } 00:43:18.910 ] 00:43:18.910 }' 00:43:19.169 [2024-06-11 14:11:11.833222] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:43:19.169 [2024-06-11 14:11:11.833288] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721368 ] 00:43:19.169 EAL: No free 2048 kB hugepages reported on node 1 00:43:19.169 [2024-06-11 14:11:11.924768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:19.169 [2024-06-11 14:11:12.009954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:19.428 [2024-06-11 14:11:12.173557] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:19.996 14:11:12 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:19.996 14:11:12 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:43:19.996 14:11:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:43:19.996 14:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:19.996 14:11:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:43:20.256 14:11:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:43:20.256 14:11:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:43:20.256 14:11:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:20.256 14:11:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:20.256 14:11:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:20.256 14:11:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:20.256 14:11:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:20.515 14:11:13 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:20.515 14:11:13 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:43:20.515 14:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:20.515 14:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:20.515 14:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:20.515 14:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:20.515 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:43:20.775 14:11:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.rhXZ57sMk2 /tmp/tmp.pWK1frCIsM 00:43:20.775 14:11:13 keyring_file -- keyring/file.sh@20 -- # killprocess 1721368 00:43:20.775 14:11:13 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1721368 ']' 00:43:20.775 14:11:13 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1721368 00:43:20.775 14:11:13 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:43:20.775 14:11:13 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:21.034 14:11:13 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1721368 00:43:21.034 14:11:13 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:21.034 14:11:13 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:21.034 14:11:13 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1721368' 00:43:21.034 killing process with pid 1721368 00:43:21.034 14:11:13 keyring_file -- common/autotest_common.sh@968 -- # kill 1721368 00:43:21.034 Received shutdown signal, test time was about 1.000000 seconds 00:43:21.034 00:43:21.034 Latency(us) 00:43:21.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:21.034 =================================================================================================================== 00:43:21.034 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:21.035 14:11:13 keyring_file -- common/autotest_common.sh@973 -- # wait 1721368 00:43:21.035 14:11:13 keyring_file -- keyring/file.sh@21 -- # killprocess 1719373 00:43:21.035 14:11:13 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1719373 ']' 00:43:21.035 14:11:13 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1719373 00:43:21.035 14:11:13 keyring_file -- common/autotest_common.sh@954 -- # uname 00:43:21.035 14:11:13 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:21.035 14:11:13 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1719373 00:43:21.294 14:11:13 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:21.294 14:11:13 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:21.294 14:11:13 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1719373' 00:43:21.294 killing process with pid 1719373 00:43:21.294 14:11:13 keyring_file -- common/autotest_common.sh@968 -- # kill 1719373 00:43:21.294 [2024-06-11 14:11:13.988458] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:43:21.294 14:11:13 keyring_file -- common/autotest_common.sh@973 -- # wait 1719373 00:43:21.553 00:43:21.553 real 0m14.421s 00:43:21.553 user 0m34.377s 00:43:21.553 sys 0m3.888s 00:43:21.553 14:11:14 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:21.553 14:11:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:21.553 ************************************ 00:43:21.553 END TEST keyring_file 00:43:21.553 ************************************ 00:43:21.553 14:11:14 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:43:21.553 14:11:14 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:21.553 14:11:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:21.553 14:11:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:21.553 14:11:14 -- common/autotest_common.sh@10 -- # set +x 00:43:21.553 ************************************ 00:43:21.553 START TEST keyring_linux 00:43:21.553 ************************************ 00:43:21.553 14:11:14 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:21.813 * Looking for test storage... 
00:43:21.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:21.813 14:11:14 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:21.813 14:11:14 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:21.813 14:11:14 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:21.813 14:11:14 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:21.813 14:11:14 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.813 14:11:14 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.813 14:11:14 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.813 14:11:14 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.814 14:11:14 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.814 14:11:14 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:21.814 14:11:14 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:43:21.814 14:11:14 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:21.814 /tmp/:spdk-test:key0 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:43:21.814 14:11:14 keyring_linux -- nvmf/common.sh@705 -- # python - 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:21.814 14:11:14 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:21.814 /tmp/:spdk-test:key1 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1721790 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1721790 00:43:21.814 14:11:14 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1721790 ']' 00:43:21.814 14:11:14 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.814 14:11:14 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:21.814 14:11:14 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:21.814 14:11:14 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:21.814 14:11:14 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:21.814 14:11:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:21.814 [2024-06-11 14:11:14.692253] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
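prep_key above pipes each raw hex key through a short Python stanza (nvmf/common.sh's format_interchange_psk) to build the NVMeTLSkey-1 string written to /tmp/:spdk-test:key0 and, below, loaded into the kernel keyring. A minimal sketch of that transformation, assuming the CRC-32 is appended little-endian as in the TP 8018 PSK interchange format:

    import base64, zlib

    def format_interchange_psk(key: str, digest: int = 0) -> str:
        # Append CRC-32 of the configured PSK (little-endian), base64 the
        # result, and wrap it in the NVMe/TCP interchange framing.
        raw = key.encode()
        crc = zlib.crc32(raw).to_bytes(4, "little")
        b64 = base64.b64encode(raw + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

    print(format_interchange_psk("00112233445566778899aabbccddeeff"))
    # -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    #    (the same string keyctl stores below)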
00:43:21.814 [2024-06-11 14:11:14.692321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721790 ] 00:43:22.074 EAL: No free 2048 kB hugepages reported on node 1 00:43:22.074 [2024-06-11 14:11:14.794210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:22.074 [2024-06-11 14:11:14.877147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.642 14:11:15 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:22.642 14:11:15 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:43:22.642 14:11:15 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:22.642 14:11:15 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:22.642 14:11:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:22.642 [2024-06-11 14:11:15.521514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:22.642 null0 00:43:22.902 [2024-06-11 14:11:15.553576] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:22.902 [2024-06-11 14:11:15.553979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:22.902 14:11:15 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:22.902 600479268 00:43:22.902 14:11:15 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:22.902 1027373792 00:43:22.902 14:11:15 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1722034 00:43:22.902 14:11:15 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1722034 /var/tmp/bperf.sock 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1722034 ']' 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:22.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:22.902 14:11:15 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:22.902 14:11:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:22.902 [2024-06-11 14:11:15.626591] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
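Unlike keyring_file, this suite never hands bdevperf a file path: the two interchange strings are planted directly in the kernel session keyring with keyctl add user ... @s (serials 600479268 and 1027373792 above), and the controller later references them by name (:spdk-test:key0). A sketch of that seeding step, shelling out to the same keyctl binary and reusing the format_interchange_psk helper sketched earlier:

    import subprocess

    def keyctl_add(name: str, payload: str) -> int:
        # 'keyctl add user <name> <payload> @s': store a user-type key in the
        # session keyring and return the serial number keyctl prints.
        out = subprocess.run(["keyctl", "add", "user", name, payload, "@s"],
                             check=True, capture_output=True, text=True)
        return int(out.stdout.strip())

    sn = keyctl_add(":spdk-test:key0",
                    format_interchange_psk("00112233445566778899aabbccddeeff"))
    print(sn)  # 600479268 in the run above; serials vary from boot to boot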
00:43:22.902 [2024-06-11 14:11:15.626651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722034 ] 00:43:22.902 EAL: No free 2048 kB hugepages reported on node 1 00:43:22.902 [2024-06-11 14:11:15.717036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:22.902 [2024-06-11 14:11:15.801903] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:23.839 14:11:16 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:23.839 14:11:16 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:43:23.839 14:11:16 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:23.839 14:11:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:24.098 14:11:16 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:24.098 14:11:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:24.358 14:11:17 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:24.358 14:11:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:24.358 [2024-06-11 14:11:17.263904] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:24.617 nvme0n1 00:43:24.617 14:11:17 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:24.617 14:11:17 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:24.617 14:11:17 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:24.617 14:11:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:24.617 14:11:17 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:24.617 14:11:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:24.876 14:11:17 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:24.876 14:11:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:24.876 14:11:17 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:24.876 14:11:17 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:24.876 14:11:17 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:24.876 14:11:17 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:24.876 14:11:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:25.135 14:11:17 keyring_linux -- keyring/linux.sh@25 -- # sn=600479268 00:43:25.136 14:11:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:25.136 14:11:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
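check_keys then verifies the same key from both sides: the .sn field that keyring_get_keys reports over RPC must match what keyctl search resolves for the name, and keyctl print of that serial must round-trip to the original interchange string (the [[ 600479268 == ... ]] comparison continues just below). The two assertions, sketched with the bperf_cmd helper from earlier — the shape of the RPC result is taken from the jq filters used above:

    import subprocess

    def get_keysn(name: str) -> int:
        # keyring/linux.sh's get_keysn: resolve a key name to its kernel serial.
        out = subprocess.run(["keyctl", "search", "@s", "user", name],
                             check=True, capture_output=True, text=True)
        return int(out.stdout.strip())

    keys = bperf_cmd("keyring_get_keys")["result"]            # sketched earlier
    rpc_sn = next(k["sn"] for k in keys if k["name"] == ":spdk-test:key0")
    assert rpc_sn == get_keysn(":spdk-test:key0")
    shown = subprocess.run(["keyctl", "print", str(rpc_sn)],
                           check=True, capture_output=True, text=True)
    assert shown.stdout.strip().startswith("NVMeTLSkey-1:00:")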
00:43:25.136 14:11:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 600479268 == \6\0\0\4\7\9\2\6\8 ]] 00:43:25.136 14:11:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 600479268 00:43:25.136 14:11:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:25.136 14:11:17 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:25.136 Running I/O for 1 seconds... 00:43:26.074 00:43:26.074 Latency(us) 00:43:26.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:26.074 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:26.074 nvme0n1 : 1.01 10494.28 40.99 0.00 0.00 12118.27 10223.62 23068.67 00:43:26.074 =================================================================================================================== 00:43:26.074 Total : 10494.28 40.99 0.00 0.00 12118.27 10223.62 23068.67 00:43:26.074 0 00:43:26.074 14:11:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:26.074 14:11:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:26.334 14:11:19 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:26.334 14:11:19 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:26.334 14:11:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:26.334 14:11:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:26.334 14:11:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:26.334 14:11:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.593 14:11:19 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:26.593 14:11:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:26.593 14:11:19 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:26.593 14:11:19 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:26.593 14:11:19 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:26.593 14:11:19 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:26.853 [2024-06-11 14:11:19.660322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:26.853 [2024-06-11 14:11:19.660846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1139780 (107): Transport endpoint is not connected 00:43:26.853 [2024-06-11 14:11:19.661838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1139780 (9): Bad file descriptor 00:43:26.853 [2024-06-11 14:11:19.662838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:26.853 [2024-06-11 14:11:19.662854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:26.853 [2024-06-11 14:11:19.662866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:26.853 request: 00:43:26.853 { 00:43:26.853 "name": "nvme0", 00:43:26.853 "trtype": "tcp", 00:43:26.853 "traddr": "127.0.0.1", 00:43:26.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:26.853 "adrfam": "ipv4", 00:43:26.853 "trsvcid": "4420", 00:43:26.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:26.853 "psk": ":spdk-test:key1", 00:43:26.853 "method": "bdev_nvme_attach_controller", 00:43:26.853 "req_id": 1 00:43:26.853 } 00:43:26.853 Got JSON-RPC error response 00:43:26.853 response: 00:43:26.853 { 00:43:26.853 "code": -5, 00:43:26.853 "message": "Input/output error" 00:43:26.853 } 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@33 -- # sn=600479268 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 600479268 00:43:26.853 1 links removed 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@33 -- # sn=1027373792 00:43:26.853 14:11:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1027373792 00:43:26.853 1 links removed 00:43:26.853 14:11:19 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 1722034 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1722034 ']' 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1722034 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:26.853 14:11:19 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1722034 00:43:27.112 14:11:19 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:27.112 14:11:19 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:27.112 14:11:19 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1722034' 00:43:27.112 killing process with pid 1722034 00:43:27.112 14:11:19 keyring_linux -- common/autotest_common.sh@968 -- # kill 1722034 00:43:27.112 Received shutdown signal, test time was about 1.000000 seconds 00:43:27.112 00:43:27.112 Latency(us) 00:43:27.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:27.112 =================================================================================================================== 00:43:27.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:27.112 14:11:19 keyring_linux -- common/autotest_common.sh@973 -- # wait 1722034 00:43:27.113 14:11:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1721790 00:43:27.113 14:11:19 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1721790 ']' 00:43:27.113 14:11:19 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1721790 00:43:27.113 14:11:19 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:43:27.113 14:11:19 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:27.113 14:11:19 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1721790 00:43:27.113 14:11:20 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:27.113 14:11:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:27.113 14:11:20 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1721790' 00:43:27.113 killing process with pid 1721790 00:43:27.113 14:11:20 keyring_linux -- common/autotest_common.sh@968 -- # kill 1721790 00:43:27.113 14:11:20 keyring_linux -- common/autotest_common.sh@973 -- # wait 1721790 00:43:27.681 00:43:27.681 real 0m5.942s 00:43:27.681 user 0m10.698s 00:43:27.681 sys 0m1.916s 00:43:27.681 14:11:20 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:27.681 14:11:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:27.681 ************************************ 00:43:27.681 END TEST keyring_linux 00:43:27.681 ************************************ 00:43:27.681 14:11:20 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@352 
-- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:43:27.681 14:11:20 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:43:27.681 14:11:20 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:43:27.681 14:11:20 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:43:27.681 14:11:20 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:43:27.681 14:11:20 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:43:27.681 14:11:20 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:43:27.681 14:11:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:27.681 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:43:27.681 14:11:20 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:43:27.681 14:11:20 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:43:27.681 14:11:20 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:43:27.681 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:43:34.291 INFO: APP EXITING 00:43:34.291 INFO: killing all VMs 00:43:34.291 INFO: killing vhost app 00:43:34.291 INFO: EXIT DONE 00:43:37.579 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:43:37.579 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:43:37.579 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:43:37.580 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:43:40.867 Cleaning 00:43:40.867 Removing: /var/run/dpdk/spdk0/config 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:40.867 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:40.867 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:40.867 Removing: /var/run/dpdk/spdk1/config 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
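The "Already using the ... driver" scan above is the teardown reporting each PCI function's current kernel binding; roughly, it is a walk over sysfs. A small sketch of that walk, on the assumption that reading each device's driver symlink is all the report needs:

    import os

    # Print every PCI function and the kernel driver it is currently bound to,
    # the same pairing the 'Already using the ... driver' lines report.
    base = "/sys/bus/pci/devices"
    for bdf in sorted(os.listdir(base)):
        link = os.path.join(base, bdf, "driver")
        drv = os.path.basename(os.readlink(link)) if os.path.islink(link) else "(unbound)"
        print(bdf, drv)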
00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:43:41.127 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:43:41.127 Removing: /var/run/dpdk/spdk1/hugepage_info
00:43:41.127 Removing: /var/run/dpdk/spdk1/mp_socket
00:43:41.127 Removing: /var/run/dpdk/spdk2/config
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:43:41.127 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:43:41.127 Removing: /var/run/dpdk/spdk2/hugepage_info
00:43:41.127 Removing: /var/run/dpdk/spdk3/config
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:43:41.127 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:43:41.127 Removing: /var/run/dpdk/spdk3/hugepage_info
00:43:41.127 Removing: /var/run/dpdk/spdk4/config
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:43:41.127 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:43:41.127 Removing: /var/run/dpdk/spdk4/hugepage_info
00:43:41.127 Removing: /dev/shm/bdev_svc_trace.1
00:43:41.127 Removing: /dev/shm/nvmf_trace.0
00:43:41.127 Removing: /dev/shm/spdk_tgt_trace.pid1227265
00:43:41.127 Removing: /var/run/dpdk/spdk0
00:43:41.127 Removing: /var/run/dpdk/spdk1
00:43:41.127 Removing: /var/run/dpdk/spdk2
00:43:41.127 Removing: /var/run/dpdk/spdk3
00:43:41.127 Removing: /var/run/dpdk/spdk4
00:43:41.127 Removing: /var/run/dpdk/spdk_pid1224776
00:43:41.127 Removing: /var/run/dpdk/spdk_pid1226044
00:43:41.127 Removing: /var/run/dpdk/spdk_pid1227265
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1227958
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1229047
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1229318
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1230194
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1230450
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1230816
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1232546
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1234298
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1234872
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1235214
00:43:41.387 Removing: /var/run/dpdk/spdk_pid1235678
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1236084
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1236281
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1236475
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1236762
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1237783
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1241025
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1241452
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1241864
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1241887
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1242454
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1242717
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1243157
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1243290
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1243606
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1243859
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1244151
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1244183
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1244795
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1245075
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1245407
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1245707
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1245738
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1246048
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1246329
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1246571
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1246816
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1247062
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1247294
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1247547
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1247796
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1248081
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1248366
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1248656
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1248935
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1249221
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1249507
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1249788
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1250073
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1250357
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1250649
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1250933
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1251218
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1251505
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1251576
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1251968
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1256000
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1341592
00:43:41.388 Removing: /var/run/dpdk/spdk_pid1346371
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1357295
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1363123
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1367370
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1368147
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1383210
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1383494
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1388035
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1394307
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1397164
00:43:41.647 Removing: /var/run/dpdk/spdk_pid1408712
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1418236
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1420063
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1420905
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1438932
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1443154
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1473746
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1478800
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1480393
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1482243
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1482362
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1482528
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1482609
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1483287
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1485526
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1486663
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1487242
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1489453
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1490238
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1491084
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1495407
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1501199
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1506321
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1544617
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1548731
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1554962
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1556305
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1557918
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1562576
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1567399
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1575343
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1575399
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1580209
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1580480
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1580741
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1581197
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1581271
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1583037
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1584743
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1586337
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1588041
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1589771
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1591363
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1597754
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1598406
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1600479
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1601633
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1609195
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1611881
00:43:41.648 Removing: /var/run/dpdk/spdk_pid1617733
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1623476
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1632842
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1640240
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1640258
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1660432
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1661085
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1661883
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1662424
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1663287
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1664079
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1664629
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1665184
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1669760
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1670003
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1676343
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1676660
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1678943
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1686979
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1687091
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1692571
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1695086
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1697096
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1698291
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1700326
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1701542
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1710710
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1711234
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1711768
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1714233
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1714766
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1715298
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1719373
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1719407
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1721368
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1721790
00:43:41.907 Removing: /var/run/dpdk/spdk_pid1722034
00:43:41.907 Clean
00:43:41.907 14:11:34 -- common/autotest_common.sh@1450 -- # return 0
00:43:42.167 14:11:34 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:43:42.167 14:11:34 -- common/autotest_common.sh@729 -- # xtrace_disable
00:43:42.167 14:11:34 -- common/autotest_common.sh@10 -- # set +x
00:43:42.167 14:11:34 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:43:42.167 14:11:34 -- common/autotest_common.sh@729 -- # xtrace_disable
00:43:42.167 14:11:34 -- common/autotest_common.sh@10 -- # set +x
00:43:42.167 14:11:34 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:42.167 14:11:34 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:43:42.167 14:11:34 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:43:42.167 14:11:34 -- spdk/autotest.sh@391 -- # hash lcov
00:43:42.167 14:11:34 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:43:42.167 14:11:34 -- spdk/autotest.sh@393 -- # hostname
00:43:42.167 14:11:34 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:43:42.167 geninfo: WARNING: invalid characters removed from testname!
00:44:08.720 14:12:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:12.010 14:12:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:14.539 14:12:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:16.440 14:12:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:18.974 14:12:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:20.941 14:12:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:22.845 14:12:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:44:23.103 14:12:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:44:23.103 14:12:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:44:23.103 14:12:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:44:23.103 14:12:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:44:23.103 14:12:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:23.103 14:12:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:23.103 14:12:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:23.103 14:12:15 -- paths/export.sh@5 -- $ export PATH
00:44:23.103 14:12:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:44:23.103 14:12:15 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:44:23.104 14:12:15 -- common/autobuild_common.sh@437 -- $ date +%s
00:44:23.104 14:12:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718107935.XXXXXX
00:44:23.104 14:12:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718107935.nKiI6z
00:44:23.104 14:12:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:44:23.104 14:12:15 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:44:23.104 14:12:15 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:44:23.104 14:12:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:44:23.104 14:12:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:44:23.104 14:12:15 -- common/autobuild_common.sh@453 -- $ get_config_params
00:44:23.104 14:12:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:44:23.104 14:12:15 -- common/autotest_common.sh@10 -- $ set +x
00:44:23.104 14:12:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:44:23.104 14:12:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:44:23.104 14:12:15 -- pm/common@17 -- $ local monitor
00:44:23.104 14:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:23.104 14:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:23.104 14:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:23.104 14:12:15 -- pm/common@21 -- $ date +%s
00:44:23.104 14:12:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:23.104 14:12:15 -- pm/common@21 -- $ date +%s
00:44:23.104 14:12:15 -- pm/common@25 -- $ sleep 1
00:44:23.104 14:12:15 -- pm/common@21 -- $ date +%s
00:44:23.104 14:12:15 -- pm/common@21 -- $ date +%s
00:44:23.104 14:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107935
00:44:23.104 14:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107935
00:44:23.104 14:12:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107935
00:44:23.104 14:12:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107935
00:44:23.104 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107935_collect-vmstat.pm.log
00:44:23.104 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107935_collect-cpu-load.pm.log
00:44:23.104 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107935_collect-cpu-temp.pm.log
00:44:23.104 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107935_collect-bmc-pm.bmc.pm.log
00:44:24.041 14:12:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:44:24.041 14:12:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:44:24.041 14:12:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:24.041 14:12:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:44:24.041 14:12:16 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:44:24.041 14:12:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:44:24.041 14:12:16 -- spdk/autopackage.sh@19 -- $ timing_finish
00:44:24.041 14:12:16 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:44:24.041 14:12:16 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:44:24.041 14:12:16 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:24.041 14:12:16 -- spdk/autopackage.sh@20 -- $ exit 0
00:44:24.041 14:12:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:44:24.041 14:12:16 -- pm/common@29 -- $ signal_monitor_resources TERM
00:44:24.041 14:12:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:44:24.041 14:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:24.041 14:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:44:24.041 14:12:16 -- pm/common@44 -- $ pid=1738280
00:44:24.041 14:12:16 -- pm/common@50 -- $ kill -TERM 1738280
00:44:24.041 14:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:24.042 14:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:44:24.042 14:12:16 -- pm/common@44 -- $ pid=1738282
00:44:24.042 14:12:16 -- pm/common@50 -- $ kill -TERM 1738282
00:44:24.042 14:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:24.042 14:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:44:24.042 14:12:16 -- pm/common@44 -- $ pid=1738283
00:44:24.042 14:12:16 -- pm/common@50 -- $ kill -TERM 1738283
00:44:24.042 14:12:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:24.042 14:12:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:44:24.042 14:12:16 -- pm/common@44 -- $ pid=1738305
00:44:24.042 14:12:16 -- pm/common@50 -- $ sudo -E kill -TERM 1738305
00:44:24.300 + [[ -n 1112348 ]]
00:44:24.300 + sudo kill 1112348
00:44:24.308 [Pipeline] }
00:44:24.324 [Pipeline] // stage
00:44:24.328 [Pipeline] }
00:44:24.341 [Pipeline] // timeout
00:44:24.345 [Pipeline] }
00:44:24.359 [Pipeline] // catchError
00:44:24.365 [Pipeline] }
00:44:24.381 [Pipeline] // wrap
00:44:24.385 [Pipeline] }
00:44:24.399 [Pipeline] // catchError
00:44:24.407 [Pipeline] stage
00:44:24.409 [Pipeline] { (Epilogue)
00:44:24.425 [Pipeline] catchError
00:44:24.427 [Pipeline] {
00:44:24.442 [Pipeline] echo
00:44:24.443 Cleanup processes
00:44:24.448 [Pipeline] sh
00:44:24.729 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:24.729 1738388 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:44:24.729 1738729 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:24.742 [Pipeline] sh
00:44:25.024 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:25.024 ++ grep -v 'sudo pgrep'
00:44:25.024 ++ awk '{print $1}'
00:44:25.024 + sudo kill -9 1738388
00:44:25.038 [Pipeline] sh
00:44:25.322 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:25.322 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:44:33.528 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:44:40.111 [Pipeline] sh
00:44:40.396 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:40.396 Artifacts sizes are good
00:44:40.412 [Pipeline] archiveArtifacts
00:44:40.420 Archiving artifacts
00:44:40.634 [Pipeline] sh
00:44:40.919 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:40.934 [Pipeline] cleanWs
00:44:40.944 [WS-CLEANUP] Deleting project workspace...
00:44:40.944 [WS-CLEANUP] Deferred wipeout is used...
00:44:40.951 [WS-CLEANUP] done
00:44:40.953 [Pipeline] }
00:44:40.976 [Pipeline] // catchError
00:44:40.988 [Pipeline] sh
00:44:41.270 + logger -p user.info -t JENKINS-CI
00:44:41.280 [Pipeline] }
00:44:41.298 [Pipeline] // stage
00:44:41.305 [Pipeline] }
00:44:41.323 [Pipeline] // node
00:44:41.330 [Pipeline] End of Pipeline
00:44:41.369 Finished: SUCCESS